perm filename SYS3[TLK,DBL]1 blob sn#155267 filedate 1975-05-13 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00033 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00004 00002	.DEVICE XGP
C00006 00003	.PORTION TITLEPAGE
C00007 00004	.MACRO B ⊂ BEGIN VERBATIM GROUP ⊃
C00009 00005	.NSEC(BACKGROUND) 
C00018 00006	.SSEC(Internal Design of BEINGs)
C00028 00007	.SSEC(BEINGs Interacting)
C00035 00008	.SSEC(Aspects of BEINGs Systems)
C00044 00009	.NSEC(OVERVIEW)
C00052 00010	.SSEC(APPROACH via MATHEMATICAL DEVELOPMENT)
C00059 00011	.NSEC(IDEAS)
C00065 00012	.SSEC(A Proposed System)
C00088 00013	.SSEC(A Timetable for Action)
C00091 00014	.SSEC(Desired Behaviors)
C00095 00015	.SSEC(Comparison to Other Systems)
C00107 00016	.SSEC(Results)
C00111 00017	.NSEC(INTERNAL ACTIVITY)
C00128 00018	.SSEC(The ENVIRONMENT)
C00146 00019	.NSEC(INITIAL KNOWLEDGE and REPRESENTATION)
C00151 00020	.SSEC(Representation: Level 2)
C00160 00021	.SSEC(Initial Knowledge: Level 2)
C00168 00022	.SSEC(Representation: Level 3)
C00177 00023	We have dwelt on BEINGs so much, the reader is now entitled to hear about the
C00198 00024	.SSEC(Initial Knowledge: Level 3)
C00199 00025	.NSEC(COMMUNICATION)
C00212 00026	.NSEC(EXAMPLES)
C00219 00027	.SSEC(Example 3: Filling in the Examples parts of Objects)
C00232 00028	.SSEC(Example 4: Considering New Compositions of Operations)
C00243 00029	.SSEC(Example 5: Proving a Conjecture)
C00249 00030	.SSEC(Example 6: Formally Investigating an intuitively believed conjecture)
C00253 00031	.NSEC(BIBLIOGRAPHY)
C00272 00032	.SSEC(Articles)
C00280 00033	.PORTION CONTENTS
C00281 ENDMK
C⊗;
.DEVICE XGP
.!XGPCOMMANDS←"/TMAR=66/PMAR=2130/BMAR=4"
.FILL

.FONT 1 "BASL30"
.FONT 2 "BASB30"
.FONT 4  "BASI30"
.FONT 5  "NGR40"
.FONT 6  "NGR25"
.FONT 7  "NGR20"
.FONT 8  "GRFX35"
.TURN ON "↑α↓_π[]{"
.TURN ON "⊗" FOR "%"
.TURN ON "@" FOR "%"
.PAGE FRAME 63 HIGH 89 WIDE
.TITLE AREA HEADING LINES 1 TO 2
.AREA TEXT LINES 4 TO 61  CHARS 1 TO 89
.NARROW 7,7
.TITLE AREA FOOTING LINE  63
.COUNT PAGE PRINTING "1"
.TABBREAK
.!XGPLFTMAR←120
.AT "ffi" ⊂ IF THISFONT ≤ 4 THEN "≠"  ELSE "fαfαi" ⊃;
.AT "ffl" ⊂ IF THISFONT ≤ 4 THEN "α∞" ELSE "fαfαl" ⊃;
.AT "ff"  ⊂ IF THISFONT ≤ 4 THEN "≥"  ELSE "fαf" ⊃;
.AT "fi"  ⊂ IF THISFONT ≤ 4 THEN "α≡" ELSE "fαi" ⊃;
.AT "fl"  ⊂ IF THISFONT ≤ 4 THEN "∨"  ELSE "fαl" ⊃;

.PAGE←0
.NEXT PAGE
.EVERY FOOTING(,⊗7{DATE},)
.SECTION←" "
.SSECTION←" "
.PORTION TITLEPAGE
.BEGIN CENTER RETAIN


⊗5THEORY  FORMATION:⊗*

⊗6A Proposal for⊗*

⊗5A  SYSTEM  WHICH  CAN  DEVELOP
MATHEMATICAL  CONCEPTS  INTUITIVELY⊗*
.GROUP SKIP 12
⊗2Doug Lenat⊗*

Assisted  by  Avra Cohn

C. Green,  Adviser




STANFORD  UNIVERSITY
ARTIFICIAL  INTELLIGENCE  LABORATORY
 







Third  Sketch


⊗4Not for distribution⊗*
.END
.FILL
.NEXT PAGE
.MACRO B ⊂ BEGIN VERBATIM GROUP ⊃
.MACRO E ⊂ APART END ⊃
.MACRO B7 ⊂ BEGIN  WIDEN 7,7 SELECT 8 NOFILL PREFACE 0 MILLS TURN OFF "↑↓α"  GROUP ⊃
.MACRO B0 ⊂ BEGIN  WIDEN 0,7 SELECT 8 NOFILL PREFACE 0 MILLS TURN OFF "↑↓α"  GROUP ⊃
.MACRO FAD ⊂FILL ADJUST COMPACT DOUBLE SPACE; PREFACE 2 ⊃
.MACRO W(F) ⊂ SELECT F NOFILL SINGLE SPACE; PREFACE 0; WIDEN 7,7 ⊃
.WIDEN 7,7
.FILL
.TABBREAK
.EVERY HEADING(⊗7Math Theory Formation,{DATE},page {PAGE}⊗*)
.EVERY FOOTING(⊗7{SECTION},,{SSECTION}⊗*)
.SELECT 1


.MACRO NSEC(A)  ⊂  TURN ON "{∞→"   
.SSECNUM←0
.SSECTION←" "
.SECNUM←SECNUM+1
.SEND CONTENTS ⊂
@2{SECNUM}. A⊗* ∞.→ {PAGE}
.⊃
.TURN OFF "{∞→"   
.SECTION←"section:   A"
.NEXT PAGE
.ONCE CENTER TURN ON "{}"
@5↓_{SECNUM}. A_↓⊗*  
. ⊃



.MACRO SSEC(A)  ⊂  TURN ON "{∞→"   
.SSECNUM←SSECNUM+1
.SSECTION←"subsection:   A"
.SEND CONTENTS ⊂
@6        A⊗* ∞.→ {PAGE}
.⊃
.TURN OFF "{∞→"   
.ONCE TURN ON "{}"
@2↓_{SECNUM}.{SSECNUM}. A_↓⊗*  
. ⊃


.SECNUM←0
.NEXT PAGE
.PAGE←0
.TURN OFF "{"
.INDENT 0
.NARROW 7,7
.SELECT 1
.INSERT CONTENTS
.PORTION THESIS
.NOFILL
.PREFACE 0
.FAD
.TURN OFF "{∞→"   
.NSEC(BACKGROUND) 
.ONCE CENTER
@5 for readers unfamiliar with Beings⊗*

This section introduces the reader to the concept of organizing knowledge
as similarly-structured modules, called BEINGs. It may be skipped by those
familiar with that construction. For a more thorough treatment,
read either:
Section 4.6 of Green et al.,
⊗4Progress Report on Program-Understanding Systems⊗*, Memo AIM-240,
CS Report STAN-CS-74-444, Artificial Intelligence Laboratory, Stanford
University, August, 1974; or 
Lenat, ⊗4BEINGS: Knowledge as
Interacting Experts⊗*, January, 1975 (preprint available from the author).

.SSEC(BEINGs and Experts)

Consider an interdisciplinary enterprise, attempted by a community of human
experts who are specialists in -- and only in -- their own fields.  What modes of 
interactions will be productive?  The dominant paradigm might well settle into
⊗4questioning and answering⊗* each other.
Instead of a chairman, suppose the group adopts rules for
gaining the floor, what a speaker may do,  and how to resolve disputes.
When a topic is being considered, one or two
experts might recognize it and speak up. In the course of their exposition
they might need to call on other specialists. This might be by name, by specialty,
or simply by posing a new sub-question and hoping someone could recognize his own
relevance and volunteer a suggestion.
If the task is to construct something, then the
activities of the experts should not be strictly verbal.  Often, one will 
recognize his relevance to the current situation and ask to ⊗4do⊗* something:
clarify or modify or (rarely) create.

What would it mean to ⊗4simulate⊗* the above activity?  Imagine several little 
programs, each one modelling a different expert. What should each program,
called a ⊗4BEING⊗*, be capable of?  It must possess a corpus of specific facts and
strategies for its designated speciality. It must interact via questioning and
answering other BEINGs. Each BEING should be able to recognize when it is relevant.
It must set up and alter structures, just as the human specialists do.

Let us return to our meeting of human experts.
To be more concrete, suppose their task is to design and code a large
computer program: a concept formation system[2]. Experts who will be useful
include scientific programmers, non-programming psychologists,
system hackers, and management personnel.
Within each of these four major families will be many individual specialists.
What happens in the ensuing session?  When an expert participates, he will
either be aiding a colleague in some difficulty
or else transferring a tiny, customized 
bit of his expertise (facts about his field) into a programmed function
which can do something.  The final code reflects the members' knowledge,
in that sense.
Of course, experts within the same family will be able to communicate things
among themselves which are unintelligible to outsiders (e.g., when the hackers
start arguing about how to patch the system bugs that appear). 
Nevertheless, if we press him,
any of these specialists could transform his compressed jargon into a more universal
message (losing some information and some efficiency), 
by giving examples and analogies, perhaps.

Suppose the project sponsor is quasi-active, submitting an initial specification
order for
the program, and then participating in the work as a (somewhat privileged) member
of the team. This individual is the one who wants the final product, hence will be
called the ⊗4user⊗*.

How could BEINGs do all this? 
There would be some little program containing information
about ⊗7CONCEPT-FORMATION⊗*
(much more than would be used in writing any single concept formation program),
another BEING who knows
how to manage a group to
⊗7WRITE-PROGRAMS⊗*, and many lower-level specialists, for example 
⊗7INFO-OBTAINER, TEST, MODIFY-DATA-STRUCTURE, UNTIL-LOOP, 
VISUAL-PERCEPTION, AVOID-CONTRADICTION, PROPOSE-PLAUSIBLE-NAME⊗*.
Like the human specialists,
the BEINGs would contain far too much information, far too
inefficiently represented, to be able to say "we ourselves constitute
the desired program!"
They would have to discuss, and perhaps carry out, the concept formation task. They
would write specialized versions of themselves, programs which could do exactly what
the BEINGs did to carry out the task, no more nor less (although they would
hopefully take much less time and be more customized).
Some BEINGs 
(e.g., ⊗7TEST⊗*) may have several
distinct, streamlined fractions of themselves in the final program. BEINGs which
only aided other BEINGs (e.g., ⊗7PROPOSE-PLAUSIBLE-NAME⊗*)
may not have ⊗4any⊗* new correlates in the 
synthesized code.

An experimental system, PUP6, was designed and partially implemented. PUP6
synthesized a concept formation program (similar to Winston's), 
but the human user had to
come up with certain specific answers to some of the BEINGs' critical queries.
A grammatical inference program and a  simple property list maintenance routine
were also generated. Only
a few new BEINGs had to be added to PUP6's original pool of 100
BEINGs in order to synthesize them, but communication
flexibility problems existed.
The choice of mathematics as the domain for the proposed system was made partially
to alleviate this problem.

.SKIP 2

.SSEC(Internal Design of BEINGs)

Now that we have developed our "external specifications" for what the 
BEINGs must do, how
exactly will they do it?  Have we merely pushed the problem of Artificial
Intelligence down into the coding of the BEINGs?  Perhaps not, for we still have
our analogy to the interacting experts. Let us carry it further, analyze
synergetic cooperation among the humans, then try to model that in our
internal design of BEINGs.

Viewing the group of experts as a single entity, what makes it
productive? The members must be very different in abilities, in order to handle
a complex task, yet similar in basic cognitive structure 
(in the anatomy of their minds) to
permit facile communications to flow.
For example, each psychologist knows how to direct a programmer to do
some of the things he can do, but the specific facts he has tucked away
under this category must be quite unique. Similarly, each expert may have
a set of strategies for
recognizing his own relevance to a
proposed question, but the ⊗4contents⊗* of that knowledge varies from
individual to individual.  The proposed hypothesis is that all the experts can be
said to consist of categorized information, where the set of 
categories is fairly standard, and indicates the ⊗4types⊗* of questions
any expert can be expected to answer. An expert is considered ⊗4equivalent⊗*
to his answers to several standard questions.
Each expert has the same mental "parts", it
is only the values stored in these parts, their contents,
which distinguish him as an individual. 
The particular set of questions he can deal with is fixed, depending on which family
the expert belongs to. There is much -- but not total -- overlap between what two
humans from different professions can meaningfully answer.

Armed with this dubious view of intelligence, let us return to the design of
BEINGs. Each BEING shall have many parts, each possessing a name (a question it
can deal with) and a value (a procedure capable of answering that question).
Henceforth, "⊗4part⊗*" will be used in this technical sense.
When a BEING asks a question, it is really just one
part who is asking. In fact, it must be that the ⊗4value⊗* subpart of some part
can't answer ⊗4his⊗* question without further assistance. He may not know
enough to call on specific other
BEINGs (in which case he broadcasts his plea, letting anyone 
respond who feels relevant), but
he should ⊗4always⊗* specify what BEING ⊗4part⊗* the question should be answered by.
By analogy with the experts, each BEING in the same family 
will have the same fixed 
set of types of parts (will answer the same kinds of queries), and this uniformity 
should permit painless intercommunication
between specialists in the same profession.  Many of these parts will be common to
more than one family (e.g., "How long-winded are you?").
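The part-and-value structure just described can be sketched in modern Python (the actual system was written in LISP; the family schema, part names, and the sample BEING below are all invented for illustration):

```python
# A minimal sketch of a BEING: a fixed set of named parts, where each part
# name is a question and each value is a procedure answering it. The schema
# and sample contents here are invented, not taken from PUP6.

FAMILY_PARTS = ["IDENTIFY", "EFFECTS", "WHEN", "COMPLEXITY"]  # hypothetical family schema

def make_being(name, **parts):
    """Build a BEING; every part name must come from the family's fixed set."""
    unknown = set(parts) - set(FAMILY_PARTS)
    if unknown:
        raise ValueError(f"parts not in this family's schema: {unknown}")
    # Missing parts default to None; the uniform anatomy is what permits
    # painless communication between members of the same family.
    return {"name": name, **{p: parts.get(p) for p in FAMILY_PARTS}}

def ask(being, part):
    """Pose one of the standard questions to a BEING; its value answers."""
    value = being[part]
    return value() if callable(value) else value

test_being = make_being(
    "TEST",
    IDENTIFY=lambda: "I recognize requests to compare two structures",
    COMPLEXITY=lambda: 0.4,
)
```

The point of the fixed schema is visible in `make_being`: a part outside the family's question set is rejected, so any BEING can be queried by any other without knowing who it is.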

Since the paradigm of
the meeting is questioning and answering, the names of the parts should
cover all the types of questions one expert wants to ask another. Each part of
each BEING will have implicit access to this list: it may ask only these
types of questions. Each BEING should ⊗4not⊗* have access to the list of all
BEINGs in the system: requests should be phrased in terms of what is wanted;
rarely is the name of the answerer specified in advance.
(By analogy: the human speaker is not aware of precisely who is in the room;
when he feels inadequate, he asks for help and hopes someone responds).

Once again: the concept of a system of BEINGs is that many entities coexist, 
clumped into a few major family groupings. Each individual BEING has
a complex structure, but that structure does not vary much from BEING to BEING;
it does not vary at all among BEINGs of the same family.
This idea has analogues in many fields: transactional analysis in
psychology, anatomy in medicine, modular design in architecture.

To carry out these ideas, we build a system of BEINGs, a modular program
which will interact with a human user and go through the same conversations,
and arrive at the same end products, that our human experts would.
Recasting the idea into operational terms, we arrive at this procedure for
writing a pool of BEINGs: 


(1) Study the task which the pool is to do. See
what kinds of questions are asked by simulated experts, and notice how the experts
divide into a few major families {f↓i}.
The total number of families is important: if there are too many, it is hard for
specialized communication to occur; if too few families, many BEINGs will be forced
to answer questions they consider irrelevant.

(2) Distill the corpus of collected communications into
a core of simple questions, Q↓f, for each family f,
in such a way that each inter-expert question or transfer
of control can be rephrased in terms of these Q's.
The sizes of the sets Q are very important.
If a Q is huge, addition of new
BEINGs will demand either great effort or great intelligence (an example of a
system like this is ACTORS). If a Q is too small, all the non-uniformity is simply
pushed down into the values of one or two general 
catchall questions (all first-order
logical languages do this). 

(3) List all the BEINGs who will be
present in the pool, by family, and fill in their parts. 
The time to encode knowledge into many simple representation schemes is
proportional to the square of (occasionally exponential in) 
the amount of interrelated knowledge (e.g., consider the frame problem).
The filling in of a new BEING is  ⊗4independent⊗* of
the number of BEINGs already in the pool, because BEINGs can communicate
via nondeterministic goal mechanisms, and not have to know the names of the BEINGs
who will answer their queries. This filling in is  of course linear in the number of
questions a BEING must answer (e.g., the maximum size of any Q↓f).

(4) The human user interacts with the completed
BEING community, until the desired task is complete.

.SKIP 2

.SSEC(BEINGs Interacting)

We now have some idea of what the internal structure of BEINGs is, but how do they
meet the external specifications we set for them: the reproduction of the human
experts' conversations and finished products?  The question is that of control in
the system, and it splits into two parts: how does the "right" BEING gain control,
and what does he "do" when he gets control?

The scenario, in PUP6, runs as follows. 
When control is about to be passed, the relinquisher
will specify a set of possible recipients (successors). 
Two extreme but common cases are
a singleton (he knows who should go next) and the set of all BEINGs (he has no
idea whatsoever).  Each possible candidate for control is asked if he is relevant
to the current situation. (If a goal is currently set forth, his EFFECTS part will
be asked if this guy can bring about that goal; if an unintelligible piece of
information is sitting around somewhere, his IDENTIFY part will be asked if this
guy can recognize that piece of info.) If more than one BEING replies that it feels
relevant, then their WHEN components are asked ⊗4how⊗* relevant they are right now.
If a tie still exists, their COMPLEXITY components are asked to decide which will be
faster, surer, lead to auxiliary desired effects, etc.
There will always be ⊗4some⊗* BEING who will take over;
the general management
types of BEINGs are always able  -- but reluctant  -- to do so. 

Once in control, a BEING B picks one of its parts, evaluates it, and repeats this
process until it decides to relinquish control. At that time, it puts forth a
list of possible successors.
For example, the ARGS
part might be first; if it asks for some arguments which no BEING has 
supplied, then the whole BEING might decide to fail. Some  parts, when evaluated,
might create a new BEING, might ask questions which require this whole process
to repeat recursively, etc. 
This "asking" really means broadcasting a request to one or two parts of
some other BEINGs (often every other BEING); 
for example "Is there a known fast way of gronking toves?" would
be asked as a search for a BEING whose COMPLEXITY indicated speed, and whose
EFFECTS part contained a production with a template matching "gronking toves".
A list of the responders would be returned. 
The questioner might pose some
new questions directly to these BEINGs, might turn control over to them directly,
might simply want to know that some exist, etc.
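The control-passing scenario can be sketched as follows (again in Python rather than the system's LISP; the candidate BEINGs, their names, and their relevance predicates are invented):

```python
# Sketch of PUP6's successor choice: broadcast to the candidates, keep those
# whose IDENTIFY part feels relevant, break ties with WHEN, then COMPLEXITY.
# All BEING contents below are illustrative examples.

def choose_successor(candidates, situation):
    """Pick the next BEING to receive control, per the PUP6 scenario."""
    relevant = [b for b in candidates if b["IDENTIFY"](situation)]
    if not relevant:
        return None  # a general-management BEING would reluctantly take over
    best_when = max(b["WHEN"](situation) for b in relevant)
    tied = [b for b in relevant if b["WHEN"](situation) == best_when]
    # COMPLEXITY decides which of the remainder is faster, surer, etc.
    return min(tied, key=lambda b: b["COMPLEXITY"])

beings = [
    {"name": "TEST", "COMPLEXITY": 0.5,
     "IDENTIFY": lambda s: "compare" in s, "WHEN": lambda s: 2},
    {"name": "TEST-EQUALITY", "COMPLEXITY": 0.9,
     "IDENTIFY": lambda s: "compare" in s, "WHEN": lambda s: 5},
]
```

Note that the caller never names the recipient: the relinquisher only supplies the candidate set and the situation, matching the "broadcast and hope someone volunteers" behavior described above.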

How does each BEING decide which parts to evaluate, and in which order,
once it gains control?
For our humans, the answer is: a combination of individual intelligence, the
training inherent in the family of expert you are, and universally accepted
constraints (common sense). For our BEINGs, we postulate a part called
ORDERING; each BEING consults its own Ordering part, the Ordering part
for its Family, and the universal Ordering Archetypical BEING. They 
partially constrain what part must be evaluated before what other part.
This appears to be difficult or tedious for whoever writes
BEINGs, since it might vary from BEING to BEING. In fact,
it rarely does vary, and most of the necessary constraints can be learned by
the system as it runs, and inserted into the proper slots.

Reexamine the question:
"What parts are evaluated, and in what order, when a particular
BEING gains control?" This decision depends primarily on the ⊗4types⊗* of parts
present in the BEING, not on their ⊗4values⊗*.  But every BEING in a family 
has the same
anatomy, so one single algorithm,
located in that family's Ordering part,
can assemble any BEING's parts into an executable
LISP function. Moreover, this assembly can be done when the system is first
loaded (or when a new BEING is first created), and need only be redone for a
BEING when the values of its parts change. Such changes are rare: experts are
not often open-minded. Thus the family's BEINGs can be compiled into executable
LISP functions.
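That family-wide assembly step might be sketched like this, with a toy two-part ordering (the real Ordering parts held partial constraints among parts, not a fixed list, and the result was a compiled LISP function, not a Python closure):

```python
# Sketch of "one algorithm per family": the family's Ordering part fixes the
# order in which any member's parts run, so a single assembler turns any
# BEING into one executable closure. Part names here are invented.

def assemble(being, family_ordering):
    """Return a closure that runs the BEING's filled-in parts in family order."""
    steps = [being[p] for p in family_ordering if being.get(p) is not None]
    def run(state):
        for step in steps:
            state = step(state)  # each part transforms the current state
        return state
    return run

family_ordering = ["ARGS", "EFFECTS"]  # the family's Ordering part, simplified
double = {"ARGS": lambda s: s, "EFFECTS": lambda s: s * 2}
run_double = assemble(double, family_ordering)
```

The assembly depends only on which part *types* are present, so it is done once at load time and redone only when a part's value changes, as the text notes.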

.SKIP 2

.SSEC(Aspects of BEINGs Systems)

It would be aesthetically pleasing to postulate that the only entities which exist
are BEINGs. Since this would require BEINGs' parts to be BEINGs, hence have parts of
their own, etc., an explosive recurrence would occur.  To avoid this, we take a slightly
different tack. Suppose that each part which has the same name must also have the same
internal structure. The format for part P is stored in the Representation part of the
archetypical BEING named P. The only allowable formats are the following:
an opaque executable expression, a pointer to some BEING or some specific part of a
BEING, a list of executable forms and pointers to BEINGs. Notice that there are
only three, and that they are all quite simple.
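The three allowable formats can be sketched as tagged values (a modern reconstruction; the BEING names and part contents below are invented):

```python
# Sketch of the three allowed part formats: an opaque executable expression,
# a pointer to a specific part of some BEING, or a list of forms and pointers.

def eval_part(part, pool):
    """Evaluate one BEING part according to its format tag."""
    tag, payload = part
    if tag == "expr":     # opaque executable expression
        return payload()
    if tag == "pointer":  # pointer to a specific part of some BEING
        name, slot = payload
        return eval_part(pool[name][slot], pool)
    if tag == "list":     # list of executable forms and pointers to BEINGs
        return [eval_part(p, pool) for p in payload]
    raise ValueError("not one of the three allowed formats")

pool = {
    "SETS":  {"DEFINITION": ("expr", lambda: "unordered collection")},
    "UNION": {"DEFINITION": ("list", [("pointer", ("SETS", "DEFINITION")),
                                      ("expr", lambda: "merge of members")])},
}
```

Because parts have only these three simple formats, there is no need for parts to themselves be BEINGs, which is exactly how the explosive recurrence is avoided.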

We shall demand that BEINGs write only new BEINGs, never any new functions,
production systems, etc. The humans often succeeded by distilling a tiny
specialization of their expertise; the BEINGs work similarly. 
In the process of discovery, this splitting occurs usually when some subpart is more
interesting than the whole.
In the process of automatic code-writing, this creation occurred when a BEING knew
how to write a fast, short, specialized, streamlined version of itself which was
capable of executing some specific subprocess used in the final "target" concept
formation program.


To clarify what BEINGs are and are not, they are contrasted with some other ideas. 
BEINGs linearly but
inefficiently subsume such constructions as demons, functions, and assertions in an
associative data base
(in the earlier papers, brief demonstrations were provided).
FRAMES are sufficiently amorphous to subsume BEINGs. In philosophy,
FRAMES are meant to model perception, and intentionally rely on implicit
default values; BEINGs avoid making decisions without full awareness of the 
justification. 
This is also the difference  between HACKER and PUP6, the first experimental pool of
BEINGs. 
Since PUP6 wrote structured programs, it should be distinguished from macro
expansion. Macro procedures expand mechanically:
@2expand(sequence   m↓1  m↓2) = (sequence  
expand(m↓1)  expand(m↓2))⊗*. BEINGs could use
information gleaned during expansion of m↓1 to improve the way m↓2 was handled.
ACTORs, unlike BEINGs, have no fixed structure imposed, and do not broadcast
their messages (they  specify who gets each message, by name, to a bureaucracy).


The performance of the BEINGs representation itself in PUP6 is mixed.
Two advantages were hoped for by using a uniform set of BEING parts.
Addition of new BEINGs to the pool was not easy (for untrained users)
but communication among
BEINGs ⊗4was⊗* easy (fast, natural). Two
advantages were hoped for by keeping the BEINGs highly structured.
The interactions (especially with the user) were
brittle, but
the complex tasks put to the system ⊗4were⊗* successfully completed.

The crippling problems are seen to be with user-system communication,
not with the BEINGs ideas themselves.
Sophisticated, bug-free programs ⊗4were⊗* generated, after hours of fairly high
level dialogue with an active user, after tens of thousands of messages passed
among the BEINGs.
Part of this success is attributed to distributing
the responsibility for writing code and for recognizing relevance, to a hundred
entities, rather than having a few central monitors worry about everything.
The standardization of parts made filling in the BEINGs' contents fairly painless,
both for the author and for BEINGs who had to write new BEINGs.

All this suggests two possible continuations, both of which are underway here
at the Stanford AI Lab. One is to
rethink the communication problems,
and develop a new system for the
concept formation program synthesis task. The earliest programs by our Automatic
Programming group had the goal "synthesize the target program somehow";
the later PUP1-6 research insisted on "getting the
target program by going through one ⊗4proper⊗* sequence of reasoning steps"; 
the group's proposed continuation wants several
untrained users to  succeed by many different "proper" routes.

This document proposes an alternative direction for research effort.
This other way of continuing
is to find a task where BEINGs are well-suited, where
the problems encountered in PUP6 won't recur. What ⊗4are⊗* BEINGs good for?
The idea of a fixed set of parts (which distinguishes them from ACTORs) is
useful if the mass of knowledge is 
too huge for one individual to keep "on top" of.
It then should be organized in a
very uniform way (to simplify preparing it for storage), 
yet it must also be highly structured
(to speed up retrieval). 
A final ideal would be to find a domain where slightly-trained users could work
naturally, without (them ⊗4or⊗* us) having to battle the staggering complexities
of natural language handling. 
For these reasons, the author has chosen ⊗4fundamental mathematics⊗* 
as a task domain.
BEINGs are big and slow, but valuable for organizing knowledge in ways 
meaningful to how it will be used. In this proposed system, BEINGs will be one
-- but not the only -- 
internal mechanism for representing and manipulating knowledge.

.NSEC(OVERVIEW)

.SSEC(SUMMARY)

The methods of mathematical creativity are being studied.  
A taxonomy of theory formation and of elementary mathematics is developed, then
embodied in a programmed system able to do simple research, form interesting
"mini-theories" and study their consequences.

	The fundamental organizational unit is the ⊗4BEING⊗*, abbreviated β. This is
merely a collection of knowledge about a certain topic, organized as the answers to a
fixed set of a couple dozen questions about that topic. In answering a query, one
individual piece of knowledge (a part of a Being) might have to call on several others.
The control is implicit in the collection of Beings which exist: each β has a Recognition
part, the answer to "Are you relevant to this situation...?", whose task is to determine
when to seize control and when to yield it.

	One unusual feature of the system will be a powerful "intuitive" ability
to analogize with part of the real world. The system may perform experiments on
this simulated Nature, and receive valid results, but the actual code which 
represents the environment is ⊗4opaque⊗*. 
⊗7For example, a model of a seesaw might exist, and the
system could play around at varying the weights on each side and their distance from
the fulcrum, and the seesaw function would explain which side sank and how fast.
This might be useful in getting an intuition about multiplication, substitution,
or symmetry.⊗*
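Such an opaque environment function might look like the following; the torque model is a reconstruction of the seesaw example, not code from the proposal, and the point is that the system may only call it and observe results, never inspect its body:

```python
# Sketch of one simulated-Nature function. The system treats the body as
# opaque: it can vary the weights and distances, observe which side sinks
# and how fast, and build intuitions, but cannot read this code.

def seesaw(left_weight, left_dist, right_weight, right_dist):
    """Report which side sinks, and how fast (proportional to net torque)."""
    net = right_weight * right_dist - left_weight * left_dist
    if net > 0:
        return ("right", net)
    if net < 0:
        return ("left", -net)
    return ("balanced", 0)
```

Experimenting with this function, the system could notice, for instance, that swapping the two sides mirrors the outcome (symmetry), or that doubling a weight has the same effect as doubling its distance (a seed for multiplication).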

	The initial knowledge in the system will consist of (i) specific facts about
mathematics, reasoning, programming, and communication, (ii) strategies
for filling out parts of incomplete β's, (iii) opaque functions
which simulate parts of Nature, and (iv) opaque judgment criteria
for aesthetics, interest, utility, complexity, etc.
The specific facts are
organized into 4 families of Beings; each family initially has about 35 β's, and
each β has about 20 parts. The families are: 
Static (eg, sets), Active (eg, relation),
Static-Meta (eg, analogy), and Active-Meta (eg, prove).
For uniformity, the strategies form a fifth family of β's, called Archetypical β's.
⊗7A strategy BEING is simply a collection of facts for 
dealing with a particular
type of BEING part; the "Examples" β contains suggestions for filling in the
Examples part of any BEING.⊗*

	The size of this corpus appears large (about 3000 β-parts to
encode, each as a little LISP program), and it is of some interest to hope that
the very same techniques which lead to discovering new mathematical knowledge
later on might be able to "grow" this knowledge base from a much smaller core --
say a collection of 100 β's with only a few parts filled in for each.
The first activity of such a system, then, would be ⊗4contemplative⊗*: the
interaction with the user would be minimal.
General strategies would interact with observations, and
new  concrete facts about its world would emerge, along with some new
specific tactics.
The system would also combine its intuitions to form plausible conjectures and,
where all terms have formal definitions, try to prove them.
The activities in this period are 
universal, not limited to any single
domain of mathematics.  

The user is considered slow and dangerously contradictory, hence not a good channel
to obtain data in general.
But as the known information swells, the need for guidance also grows. 
At some point the
system may simply be swamped by a multitude of equally-mediocre alternatives to
investigate. 
It will then (albeit reluctantly) request direction from
a human user, in what is to be an ⊗4assimilative⊗* phase. 
These teachings should be the
core definitions of a specific field, and of course should be based on what is 
already mastered. The first experiences could be in set theory, Boolean algebra,
abstract algebra, logic, or
arithmetic.  This will probably be the level finally attained by the actual system.

One higher mode of interaction is conceivable: that of a colleague in research.
In conjunction with a
human adviser, the system would propose and explore interesting new relationships,
decide which creations to name, explore the intuitive meanings of statements, etc.
Hopefully, the reader has balked, complaining that this sounds just like the earlier
phases. In fact, the system will not ring a bell and suddenly switch its activities;
it has no way of knowing that its discovery of PLUS is not new to Mankind.  
The driving and pruning forces in all phases are the same: use 
aesthetics and utility judgments to fill out parts of incomplete BEINGs.
⊗7If the guidance of the human turns out to be
important, however, then it will come as no surprise if
the flavor of the interactions changes as the system enters a realm unfamiliar
to the user.⊗*

.SSEC(APPROACH via MATHEMATICAL DEVELOPMENT)

A Metaphor: Mathematics Theory Formation as Path-Finding

The central idea is that mathematical creativity can be replicated by careful
utilization and extension of a large corpus of knowledge, 
guided by judgmental criteria.

Consider a foundation of mathematical knowledge:

.B0
		⊂ααααααααααααααααααααααααααααααα⊃
		~	   Logic, Proof,	~
		~	 Naive Set Theory	~
		%ααααααααααααααααααααααααααααααα$
.END

Using 
abstraction from reality, analogy with existing theories,
the postulational method, and problem-solving techniques,
the researcher compounds the given concepts into new constructs, relations,
theorems, definitions, axioms, etc. 
The staggering variety of alternatives to investigate 
includes all known mathematics, much trivia, countless deadends, and so on.
The only "successful" paths near the core 
are the narrow ribbons of known mathematics 
(perhaps with a few undiscovered other slivers).

How should we walk through this immense space, with any hope of following the
few, slender branches of already-established mathematics (or some equally successful
new fields)? We must do hill-climbing; as new concepts are formed, decide how 
promising they are, always explore the currently most-promising new concept. The
evaluation function is quite nontrivial, and this research may be viewed as an
attempt to study and explain and duplicate the judgmental criteria people employ:
aesthetic beauty$$ Simplicity, harmony, unity *, utility$$ Economy 
of representation,
usefulness elsewhere *, richness$$ Ties to other concepts, 
analogies, good intuitive models *.
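The hill-climbing walk can be sketched as best-first exploration of concepts ranked by an interestingness estimate; the scoring and expansion functions below are toy placeholders for the judgmental criteria under discussion:

```python
# Sketch of the path-finding metaphor: always develop the currently
# most-promising concept, pushing its offspring back onto the frontier.
# The "interest" and "expand" functions here are toy stand-ins for the
# aesthetic/utility/richness criteria described in the text.

import heapq

def explore(core, expand, interest, steps):
    """Best-first walk outward from the core concepts (hill-climbing)."""
    frontier = [(-interest(c), c) for c in core]  # max-heap via negation
    heapq.heapify(frontier)
    developed = []
    for _ in range(steps):
        if not frontier:
            break
        _, concept = heapq.heappop(frontier)
        developed.append(concept)
        for new in expand(concept):  # new constructs join the frontier, ranked
            heapq.heappush(frontier, (-interest(new), new))
    return developed

# Toy run: "interest" favors short names; expansion appends a prime mark.
developed = explore(["a", "bb"],
                    expand=lambda c: [c + "'"] if len(c) < 3 else [],
                    interest=lambda c: 10 - len(c),
                    steps=4)
```

Everything of substance lives in `interest`: with a good evaluation function the walk follows the mountain ranges, and with a deficient one its failures point at the unobvious factors, exactly as argued below.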

In the above picture, one can imagine narrow filaments of "known", interesting
mathematics emanating from the central core, winding and twisting, often intersecting.
Our task is to walk away from the central core concepts, yet follow  one of
these filaments.$$ Or an undiscovered filament which is equally interesting *

The main point to make here is that with proper evaluation criteria, we convert the
flat picture   (wiggly lines coming from a central core) to
a breath-taking relief map:
the known lines of development become
mountain ranges, soaring above the vast flat plains 
of trivia and inconsistency
below. Occasionally an isolated
hill is discovered;$$ E.g.,
Knuth's ↓_Surreal Numbers_↓ * perhaps
whole ranges lie undiscovered for long periods of time.$$ E.g.,
non-Euclidean geometries * Certainly the terrain far from the initial core is
not at all explored.  
Intuition is like vision, letting the explorer observe a distant mountain,
long before he has conquered its intricate, specific challenges.

If the criteria for evaluating interestingness and promise are good enough, then
it should be straightforward to simply push off in any direction, locate some
nearby peak, and follow the mountain range along (duplicating the development in
some field). In fact, by intentionally pushing off in apparently barren directions,
new ranges might be encountered. If the criteria are "correct", then ⊗4any⊗* new
discovery the system makes and likes will necessarily be interesting to humans.

If, as is more likely, the criteria are deficient, this too will tell us much. Before
beginning, we shall strive to include all the obvious factors which enter into
judgmental decisions, with appropriate weights, etc. If the criteria fail, then
we can analyze that failure and learn about one unobvious factor in evaluating
success in mathematics (and in any creative endeavor). After modifying the criteria
to include this new factor, we can proceed again.

Thus we expect to learn new, nonobvious facts about how mathematicians judge the
success of their efforts. In the unlikely event that this fails, it will be because
we have accounted for all these features, and have developed a useful tool for
exploring mathematical theories.
.NSEC(IDEAS)

Throughout all of science, one of the most important issues is that of
theory formation: how to extend, when to define, what to examine next,
how to recognize potentially related concepts, how to tie such concepts
together productively,
how to use intuition, how to choose, when to give up and try another
approach.  These questions are difficult to answer precisely, even in a
single domain.  Problems with natural language, with experimental
apparatus, and
with subjects which are complex yet poorly-structured,
all becloud the answers.  By restricting the domain of attention to 
⊗4mathematics⊗*, we hope to avoid these difficulties.

A ⊗4solution⊗* to this task would mean successfully
accounting for the ⊗4driving⊗* and the ⊗4pruning⊗* forces which
result in interesting
mathematical theories being developed. Success
could be measured in operational terms, by applying these forces to
various domains of mathematics, and comparing the results to what is
already known in those fields.

The ideas explored here are that:

(i) These forces are (in decreasing order of importance) aesthetics/interestingness,
intuition, utility, analogy, inductive inference (based on empirical evidence), 
and deductive inference (formal methods).

(ii) Each of these forces is useful both in generating new conjectures, and
in assessing their acceptability.

(iii) If the essence of these ideas can be factored out into an explicit set
(of rules, predicates, BEINGs, programs...), then they can be used to
develop almost any branch of mathematics, at almost any level.

(iv) A protocol was taken; it indicates that the researcher must have a very
good set of strategies, organize them carefully, and use them wisely
to avoid getting bogged down in barren
pursuits. Some of this wisdom must pertain to precisely what is to be
remembered/recorded: a surfeit is bewildering, a shortage dangerous.

(v) Each mathematical concept should be represented in several ways, 
including declarative, operational, exemplary (especially boundary
examples), and intuitive.

(vi) A large foundation of intuition, spanning several mathematical and real-
world concepts, is prerequisite to sophisticated behavior in ⊗4any⊗*
branch of mathematics.  It is not "cheating" to demand some intuitive
concept of sets, before studying number theory, nor to demand some
intuitive understanding of counting before studying set theory, provided the
intuition is ⊗4opaque⊗* (can be used but not inspected in detail)
and fallible.
The more serious attack on the reliance upon divinely-provided 
intuitive abilities is
that the creators might stack the deck: might contrive just the right intuitions to
drive the worker toward making the "proper" discoveries.  The rebuttal is two-pronged:
first, one must assume the integrity of the creators; they must strive not
to anticipate the precise uses that will be made of the intuition functions. Second,
regardless of how contrived it was, if a small set of intuition models were found
which are sufficient to drive a researcher to discover a significant part of
mathematics, that alone would be an interesting discovery 
(educators would like to ensure
that children understood this core of images, for example).

(vii) The vast amount of necessary initial knowledge can be 
generated from a much smaller
core of intuition and definite facts, using the same collection of
strategies and wisdom which also drive the discovery and the development
(those outlined above in (i)-(iv)).

(viii) The more basic the initial core concepts, the more chance there is that the 
research will go off in directions different from humans, the more
chance it will be a waste of time, and the more valid the test of the search-pruning
forces.
.SKIP 3
.SSEC(A Proposed System)

Let us consider now what would be the characteristics of
a man-machine system which could be used  experimentally. 
The system would have about a hundred packets of information, each of which deals
with a small concept related to the foundations of mathematics, techniques for
research, etc.  Inside each packet is an organized cluster of
specific facts, intuition,   strategies, knowledge of how to
use the facts and the strategies, and an ability to estimate
the interest of the packet's topic and its surety.  
Each such knowledge module will be called a ⊗4BEING⊗*, abbreviated β, and each
unit of its contents will be called a ⊗4part⊗*.

The system would think to itself awhile, producing primarily intuitive "universal"
relationships. Since these activities don't utilize any alien authority, 
this ⊗4contemplative⊗* stage can be programmed and run
even before any natural communication system is designed.
The overall control flow would be a series of Complete(P,B) calls, in which some
part P of some β B would be worked on, filled out more, etc. 
The driving/pruning forces would each time select the next (P,B) pair.
During the course of such
completions, new β's might be called for (split off rich parts of already-existing
β's).  One huge saving for the creators would be that the system should be able to
fill in examples  of each β itself; much of this phase will in fact be doing just
that. Many mini-theorems will arise as a result of filling out examples of
Relations, Compositions, Conjectures, Theorems, etc.
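That control flow might be rendered schematically as follows; Complete(P,B) is
named in the text, but select_task and all other details here are illustrative
stand-ins for the driving/pruning forces:

```python
def contemplate(beings, select_task, complete, cycles=1000):
    """Schematic top-level loop: a series of Complete(P,B) calls.
    The driving/pruning forces (here the hypothetical select_task)
    choose the next (part, BEING) pair; completing it may split
    off new BEINGs from rich parts of existing ones."""
    for _ in range(cycles):
        task = select_task(beings)          # most interesting (P, B) pair
        if task is None:                    # nothing worth working on
            break
        part, being = task
        new_beings = complete(part, being)  # fill out part P of BEING B
        beings.extend(new_beings)
    return beings
```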

Eventually, the system's model of the user would indicate that his
guidance, though slow and errorful, was preferable to continuing this wandering
development. The system might ask for specific information relating to the
concepts it had discovered the best intuitive "theorems" about, or might simply
request tutoring in any domain of the user's choosing.

The human user's
first major task would be to input a body of concepts about a specific domain
(for each concept, he should provide definitions, examples, intuitive pictures,
etc.) Then the system will begin exploring that domain, using its
(hopefully universal) body of mathematical strategies.  Occasionally, the
user may interact with the system.  Occasionally, the system may do
something interesting.  The following ideas are fairly concrete, dealing
with such a programmed, runnable system.

(i) The system, if containing modules for each driving and pruning force,
should operate even if some of these forces have been
"turned off,"  so long as ⊗4any⊗* of the modules remain
enabled. ⊗7 For example, if all but
the formal manipulation knowledge is removed, the
system should still grind out
(simple) proofs. If all but the analogy and intuition modules are excised,
some plausible (but uncertain) conjectures should still be produced and
built upon.   ⊗*
If these forces are buried deep in an Environment, they should be tunable
(by the creators) to almost negligibility, so the same experiments can still be
carried out.
The converse situation should also hold: although still functional with any module
unplugged, the performance ⊗4should⊗* be noticeably degraded. 
That is, while not indispensable, each module should nontrivially help the others.
⊗7For example,
the job of proving an assertion
should be made much easier by the presence of intuitive understanding. If
a constructive proof is available, the necessary materials will already be
sketched out for the formal methods to build upon.⊗*

(ii) The human working with the system has several roles. First, he must
determine what domain of mathematics is to be examined, what is to be
assumed as known, etc.  Second, he must guide the system, by suggestion
or discouragement, to avoid (probably) fruitless investigations, and to
concentrate on desirable topics.
Third, he might be called on as absolute authority, to provide a needed
fact (e.g., a theorem from another domain) at just the right time.
Ultimately, he might become a co-researcher, when and if the system is
operating in a domain unknown to him.

(iii) In what sense will the system justify what it believes? The aim is
not to build a theorem-prover, yet the leap from "very probable" to 
"certain" is not ignorable.  Many statements are infinitely probable yet
silly (e.g., ⊗7given a number x, choose numbers y at random. The probability
that y > x is unity⊗*.)  Some sophisticated studies into this problem have
been done [Pietarinen] and may prove usable. 
.BEGIN SELECT 6 SINGLE SPACE
	The mechanism for belief in each fact, its certainty, should be
descriptive (a collection of supporting reasons) with a vector of numerical
probabilities (estimated for each factor) attached. These numbers
would be computed at creation of this
entity, recomputed only as required.   The most fundamental entities may
have ⊗4only⊗* numerical weights. 
If the weight of any entity changes, no "chasing
around" need be done. Contradictions are not
catastrophic: they simply indicate that the reasons supporting each of the
conflicting ideas should be reexamined, their intuitive and formal
justifications scrutinized, until the "sum" of the ultimate beliefs in
the contradictory statements falls below unity, and until some intuitive
visualization of the situation is accepted.
If this never happens, then a problem really exists here, and might
have to be assimilated as an exception to some rule, might decrease the
reliability placed on certain facts and methods, etc.
This algorithm, whatever its details, should be embedded implicitly in the
control ⊗4environment⊗*; the system should not have the power to inspect or
modify it.
.END
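A minimal realization of such a belief mechanism might look like the following;
the combination rule, the reason names, and the 0.5 damping factor are all
illustrative assumptions rather than fixed design decisions:

```python
class Belief:
    """A fact's certainty: a collection of supporting reasons, each
    with a numerical weight, computed at the entity's creation and
    recomputed only as required (no global 'chasing around')."""
    def __init__(self, reasons):
        self.reasons = dict(reasons)        # reason name -> weight in [0,1]

    def certainty(self):
        # Crude combination rule: probability that at least one
        # supporting reason holds, treating reasons as independent.
        p = 1.0
        for w in self.reasons.values():
            p *= (1.0 - w)
        return 1.0 - p

def reconcile(a, b, damping=0.5):
    """Contradiction handling: reexamine (here, merely scale down)
    the supporting reasons of both conflicting statements until the
    sum of their certainties falls below unity."""
    while a.certainty() + b.certainty() >= 1.0:
        for belief in (a, b):
            for r in belief.reasons:
                belief.reasons[r] *= damping
    return a.certainty(), b.certainty()
```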

(iv) The communication between the system and the human should be in a
language suited to the particular role he is playing. Thus there can be 
some formal language, some traditional math notation language, some
pictorial language, etc.  Although efficiency will demand a fixed
syntax and semantics for each of these, a trial protocol has indicated
that the typical form of mathematical communication is well defined
(i.e., it should be feasible to construct formal languages in this
domain, for which the user will not need much prior training).

(v) The following diagram indicates the (traditional) logical progression
of domains in mathematics, and the system should be able to start almost
anywhere and move forward (following the arrows).
Movement backward might be possible, and in some cases may be quite smooth.
This is because the psychological progression does not mirror the logical
progression.
.B0


Elementary Logic  ααα→  Theorem-Proving  ααααααααααααααα⊃
    ↑							~
    ~							~
    ~							~
    εαααααααα→  Geometry  ααα→  Topology		~
    ~		    ~		↑      ~		~
    ~		    ~		~      ~		~
    ~		    ↓	        ~      ↓ 		~
    ~      Analytic Geometry    ↓   Algebraic Topology	~
    ~		  ↑Measure Theory      ↑		~
    ~             ~↑ 		       ~		~
    ↓	          ~~ 		       ~		~
Boolean Algebra  αββαα→  Abstract Algebra 		~
    ↑             ~~      ~				↓
.ONCE TURN ON "α"
    ~	          ~↓      ~		  Program Verification
    ~	    Analysis      ↓				↑
    ~		   ↑     Concrete Algebra		~
    ~              ~      ↑				~
    ~		   ~      ~				~
    ↓		   ~      ~				~
Set Theory  ααα→  Arithmetic  ααα→  Number Theory	~
		      ~					~
		      ~					~
		      ↓					~
		Combinatorics  ←ααα→  Graph Theory  αααα$
.E
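Internally, such a progression is just a directed graph. The following sketch
transcribes only a few of the diagram's arrows, as an illustration of how
"moving forward along the arrows" from a chosen starting domain could be computed:

```python
# A fragment of the diagram's arrows, as an adjacency list.
PROGRESSION = {
    'Elementary Logic': ['Theorem-Proving', 'Geometry'],
    'Geometry':         ['Topology'],
    'Set Theory':       ['Arithmetic'],
    'Arithmetic':       ['Number Theory', 'Combinatorics'],
    'Combinatorics':    ['Graph Theory'],
}

def reachable(start):
    """All domains the system could reach from 'start' by moving
    forward (following the arrows), via depth-first search."""
    seen, stack = set(), [start]
    while stack:
        field = stack.pop()
        if field not in seen:
            seen.add(field)
            stack.extend(PROGRESSION.get(field, []))
    return seen
```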

(vi) Advancement in field x
should be much swifter if field y is mastered already, regardless which
fields x and y represent.  An analogous statement applies to progress within any 
field, when other parts of it have been developed. 

(vii) To start in a particular field, there must be much 
intuition, and some definite facts, about each preceding ("⊗8αααα→⊗*"
in the diagram above)
domain of mathematics.  For this reason, the system is expected to start with
logic, set theory, Boolean algebra, or arithmetic, and move from one of these 
to another, or move along the arrows in the diagram. 
The progression to number theory is the tentative choice for an advanced thrust.

(viii) Since the precise strategies are so crucial, it might be
advantageous to allow them to evolve. This includes changing the system's
notion of interestingness as its experience grows. Such an ability is so slippery
that the system is tentatively planned ⊗4not⊗* to have much freedom here.
Intuitions and strategies may be inspected and changed, 
just like specific facts, but
the notions of how to judge interestingness, belief, safety, difficulty, etc. 
(plus all the algorithms for split-second estimating and updating these)
are fixed for
the system by its creator. If they are unsatisfactory, he must retune them.

(ix) It seems desirable to use a single representation for all of these: 
specific knowledge about objects (e.g., bag) and
operators (e.g., union); general knowledge about meta-objects (e.g., conjectures)
and meta-activities (e.g., prove, communicate).
A fifth category is the strategic information dealing with how to recognize,
interpret, fill in, modify, check, and work with the other types of knowledge.
A family of BEINGs will be designed for each of the five
knowledge categories; each family
will have its own set of BEING parts. The parts of a specific knowledge BEING
will relate all the various kinds of things one can know about a single
mathematical concept (Usage, Boundary examples, Name,...); the parts of a
strategy BEING S will contain guides for filling in part S of each 
specific information BEING.
This last statement was tricky: a strategy β is 
simply an expert on dealing with one
particular part of a group of other BEINGs. For this reason, we also call it an
⊗4archetypical⊗* β, and its name must coincide with the name of a part; occasionally,
it will be named B.π, which means that it is useful for dealing with part π of any
BEING in the B family of BEINGs.
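The naming convention for archetypical β's suggests a simple lookup rule; a
sketch follows, in which the dictionary interface and all entries are hypothetical:

```python
def archetype_for(part_name, family, archetypes):
    """Find the strategy BEING expert on a given part: its name must
    coincide with the part name, or take the qualified form
    'FAMILY.part' when it applies only to one family of BEINGs."""
    qualified = "%s.%s" % (family, part_name)
    # Prefer the family-specific expert, fall back to the general one.
    return archetypes.get(qualified) or archetypes.get(part_name)
```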

(x)  It is ⊗4not⊗* desirable to have only one representation of knowledge in a system;
while multiple knowledge formalisms create interfacing difficulties, the advantages
of expressing a piece of information in the best-suited way is considered worth the
headaches of interfacing.   BEINGs are fine for storing structured information in an
accessible manner, but much of the system will be inaccessible. For example,
the "fixed" wisdom of how to home in on the most
relevant strategies at any given time can be stored in a more efficient
manner, completely opaque to the system, as implicit 
compiled meta-strategic Environment functions.

(xi) Control in the system will involve zeroing in on a relevant part of a relevant
BEING, then using strategies dealing with such a part to work on it. The proposed
control algorithm is tedious but not complicated; it is presented in section  4,
in subsection 4.1, ⊗4the Environment⊗*.
⊗7Often the family of BEINGs is determined, then the subfamily, then the
specific BEING. The group of parts relevant is determined, followed by the
specific part. The specializing choice is typically made by the currently
available group. For example, after determining that one of the Proof
BEINGs is relevant, the system lets all the proof BEINGs fight it out and
decide which of them is most relevant. This diffusion of decisive power is
common in human activities but surprisingly rare in computer programs.⊗*
Let us suppose that somehow we have selected that part p of BEING b must be
filled in.
Strategies associated with that kind
of part for that family of BEING will then be run, and will attempt to
fill in the part p. This may result in new discoveries, in new BEINGs being
created, in failure, in fully filling in p, or in partially filling it in
but stopping because some new fact was encountered which might be more
interesting.
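This successive narrowing (family, then BEING, then part, each choice made by
the currently available group) might be sketched as follows; the dictionary
layout and the relevance-scoring interface are illustrative assumptions:

```python
def zero_in(families, score):
    """Diffusion of decisive power: at each level, the members of
    the currently available group 'fight it out' via their own
    (hypothetical) relevance estimates, here the score function."""
    family = max(families, key=lambda f: score(f['name']))          # which family?
    being = max(family['members'], key=lambda b: score(b['name']))  # which BEING?
    part = max(being['parts'], key=score)                           # which part?
    return being['name'], part
```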

(xii) The basic mechanism is thus the filling in and the running of parts of BEINGs.
But BEING parts are generally procedural knowledge, so this task really means
automatic code synthesis. Knowledge is stored with an eye toward future usage,
both in where it is placed and how it is recorded.
⊗7This is the logical continuation of the usage-oriented storage originating in
PUP1 and developed into BEINGs in PUP6, both described in [Green et al].⊗*

(xiii) The human user might be modelled within the formalism, by a
single β named USER which models a person ⊗7(including his extremes
of absolute authority and dismal self-contradiction, of adaptability and fallibility,
of creativity and impatience, etc.)
Its WORTH part could indicate the costs and 
desirability of  querying the user in any given situation.⊗*
Actual translations could be effected by efficient environmental 
functions called by this β, or by a subfamily of Communication β's.

(xiv) A balanced distribution of intelligence ought to exist: related β parts which
are rarely accessed separately should be coalesced, and any single type of β part
used very heavily should be split. Notice that this theme has two quite different
realizations. During run time, this policy would refer to the contents held in
existing BEINGs' parts (e.g., the Structure part exists only to indicate what to
do about a β's part getting too big). Before the system is actually created, however,
this policy means altering the proposed anatomy of BEINGs ⊗7(proclaiming that some
family has to have this extra part present, that these three parts can be replaced
(in all BEINGs of a given family) by the following more general part, etc.)⊗*
The runtime restructurings occur based on knowledge recently created by the
system; the planning-stage restructurings are now being based on results from
hand simulations of the system.  Once again: during runtime, the set of parts that
any specific BEING can ever have is fixed by the family (profession) to which he
belongs.

.SELECT 1
.SSEC(A Timetable for Action)

1. Codify the necessary core of initial knowledge (facts and the wisdom to employ them).
⊗7Reality: See Given Knowledge, as presented in the Second Sketch, 
completed 12/10/74.⊗*

.TURN ON "{"

2. Formulate a sufficient set of new ideas, design decisions, and intuitive assumptions
to make the task meaningful and feasible.
⊗7Reality: firmed up by February 1, 1975. This is the essence of this document!⊗*

3. Use these ideas to represent the core knowledge of mathematics collected in (1),
in a concrete, simulated system.
⊗7Reality: the current version of Given Knowledge casts this into the β-family format.
Hand-simulations done during March, 1975, with this "paper" system.⊗*

4. Implement a realization of this system as a  computer program.
⊗7Reality: Now under way: {DATE}.⊗*

5. Debug and run the system. Add the natural language abilities gradually, as needed.
⊗7Reality: Scheduled for May to November of 1975.⊗*

.TURN OFF "{"

6. Analyze the results obtained from this system, with an eye toward: overall
feasibility of automating creative mathematical discovery; adequacy of the initial
core of knowledge; adequacy of the ideas, design decisions, implementation
details, and theoretical assumptions.  Use the results to improve the system;
when "adequate,"  forge ahead as far as possible into as many domains as possible,
then reanalyze. ⊗7Reality: 
the 5↔6 cycle will terminate in the Winter of 1976.⊗*

.SSEC(Desired Behaviors)

The conception of the project is to build a system that learns and does
mathematics by creating and maintaining and using "good" internal
organizations of its knowledge.
The sorts of behavior envisioned (evidences of successful assimilation and of
intuitive behavior) are:


Discovering, accepting, filing, and searching for new information in a 
useful, connected manner,

Giving quick intuitive judgements 
(short of formal proof or disproof) about the truth
of conjectures,

Proposing reasonable (not necessarily true) new conjectures.
In fact: proposing at least one conjecture which is intuitively clear yet false,

Having a dynamic sense of interestingness, worth-of-pursuing, aesthetic promise,

Weighing evidence for or against claims,

Assessing the difficulty of problems,

Extending and generalizing from examples (inductive inference),

Giving constructive plans for proof/solution to the extent 
that they are present in the intuition,

Adapting dynamically -- "learning" -- readjusting old schemata, shifting, reorganizing,

Effectively mobilizing facts and techniques by using analogy, relevant
features, etc.

Exercising a notion of the relatedness of propositions ⊗4apart⊗* from
the logical notions (probable implication, co-dependence on something
else, support for, interdependence...): to give a convincing argument,
to explain meaningfully, to be convinced or explained to,

Having, maintaining, using, and discovering several organizations 
over the same knowledge for different uses,

Understanding of math as a logical whole with many interconnections;
ability to take different starting points,

Reflecting, clarifying, and interrelating its own content (to name things,
to isolate things, to reorganize itself),

Inventing mini-theories on a topic, to do small-scale research by tying
fragments and observations together into a coherent whole; generalizing
from the results of working on a few problems.

To be taught by various techniques, by users who have had 
about an hour's preparation
in the content and the format of the possible dialogues. 

To discover some theorem or relationship or new interesting construct previously
unknown ⊗4to the user⊗*. 
⊗7(Note the qualification; this might involve training children
to use the system and having them play with elementary concepts; it might mean 
letting a college student unfamiliar with abstract algebra play in that domain;
 etc.)⊗*

.SKIP 3

.SSEC(Comparison to Other Systems)

One popular way to explicate a system's design ideas is to compare it to other,
similar systems, and/or to others' proposed criteria for such systems. There is
virtually no similar project known to the author, despite an exhaustive search
(see Bibliography). A couple of tangential efforts will be mentioned, followed by
a discussion of how the system will measure up to the understanding standards set
forth by Moore and Newell in their MERLIN paper.

Several projects have been undertaken which comprise a small piece of the proposed
system, plus deep concentration on some area ⊗4not⊗* under study here. For example,
Boyer and Moore's theorem-prover embodies some of the spirit of this effort, but its
knowledge base is minimal and its methods purely formal.  Badre's CLET system worked
on learning  the decimal addition algorithm
⊗7(given the addition table up to 10 + 10,
plus an English text description of what it
means to carry, how and when to carry, etc.)⊗* but the mathematics aspects of the
system were neither emphasized nor worth emphasizing; it was an interesting natural
language communication study.  Gelernter has worked on using prototypical examples
as analogic models to guide search in geometry, and Bundy has used "sticks" to help
his program work with natural numbers.  One aspect that each of these systems lacked
was size: they all worked in tiny toy domains, with minuscule, carefully prearranged
knowledge bases, with just enough information to do the job well, but not so much that
the system might be swamped. The proposed system would have all the advantages and all
the dangers of a non-toy system with a massive corpus of data to manage.  The other
systems did not deal with intuition, or indeed any multiple knowledge source except
examples. Certainly none has considered the paradigm of ⊗4discovery and evaluation of
the interestingness of structure⊗*; the others have been "here is your task, try and
prove it,"  or, in Badre's case, "here is the answer, try and translate/use it."

There is very little thought about discovery in mathematics from an algorithmic
point of view; even clear thinkers like Polya and Poincare' treat mathematical 
ability as a sacred, almost mystic quality, tied to the unconscious.
The writings of philosophers and psychologists invariably attempt to examine
human performance and belief, which are far  more manageable than creativity
in vitro.  Belief formulae in inductive logic (e.g., Carnap, Pietarinen) 
invariably fall back upon how well they fit human measurements. The abilities of
a computer and a brain are too distinct to consider blindly working for results
(let alone algorithms!) one possesses which match those of the other.

Finally, we discuss criteria for the system. The last section tried to say
that the two important criteria are final performance and initial starting point.
That is, what is it given (including the knowledge in the program environment),
and what does it do with that information.  Moore and Newell have published some
reasonable design issues for any proposed understanding system, and we shall now
see how our system answers their questions. 
⊗7(Aside: Each point of the taxonomy which they
provide before these questions is covered by the proposed system).⊗*

.BEGIN W(6)  NARROW 4,0  

Representation: Families of BEINGs, simple situation/rules, opaque functions.
	Scope: Each family of β's characterizes one type of knowledge. 
			Each β represents one very specialized expert.
			The opaque functions can represent intuition and the real world.
	Grain: Partial knowledge about a topic X is naturally expressed as an incomplete BEING X.
	Multiple representations: Each differently-named part has its own format, so, e.g.,
		examples of an operation can be stored as i/o pairs, the intuition points to an
		opaque function, the recognition section is situation/action productions, the
		algorithms part is a quasi-executable partially-ordered list of things to try.
Action: Most knowledge is stored in β-parts in a nearly-executable way; the remainder is
	stored so that the "active" segment can easily use it as it runs.  The place that
	a piece of information is stored is carefully chosen so that it will be evoked
	in almost all the situations in which it is relevant.  The only real action in the
	system is the selective completion of β's parts (occasionally creating a new β).
Assimilation: There is no sharp distinction between the internal knowledge and the
	task; the task is really nothing more than to extend the given knowledge while
	maintaining interest and aesthetic worth.  The only external entities are the
	user and the simulated physical world. Contact with the first is through a
	simpleminded translation scheme, with the latter through evaluation of opaque
	functions on observable data and examination of the results.
Accommodation: translation of alien messages; inference from (simulated) real-world example data.
Directionality: The Environment gathers up the relevant knowledge at each step to fill
	in the currently worked-on part of the current β, simply by asking that part
	(its archetypical representative), that β, and its Tied β's what to do.
	Keep-progressing: at each stage, there will be hundreds or thousands of unfilled-in
		parts, and the system simply chooses the most interesting one to work on.
Efficiency: 
	Interpreter: Will the contents of β's parts be compilable, or must they remain
		completely inspectable? One alternative is to provide two versions, one
		fast one for executing and one transparent one for examining. 
		Also provide access to a compiler, to recompile any changed (or new) part.
	Immediacy: There need not be close, rapid-fire communication with a human,
		but whenever communicating with him, time ⊗4will⊗* be important; thus the
		only requirement on speed is placed upon the translation modules, and
		they are fairly simple (due to the clean nature of the mathematical domain).
	Formality: There is a probabilistic belief rating for everything, and a descriptive
		"Justifications" component for all β's for which it is meaningful.
		There are experts who know about Bugs, Debugging, Contradiction, etc.
		Frame problem: when the world changes, make no effort to update everything.
			Whenever a contradiction is encountered, study its origins and
			recompute belief values until it goes away.
Depth of Understanding:  Each β is an expert, one of whose duties is to announce his
	own relevance whenever he recognizes it. The specific desire will generally
	indicate which part of the relevant β is the one to examine. In case this loses,
	each β has a part which (on the basis of how it failed) points to alternatives.
	Access to all implications: The intuitive functions must simulate this ability,
		since they are to be analogic. The β's certainly don't have such access.

.END
.SSEC(Results)

There are several possible outcomes of all this. Even the most dismal
would yield some information about theory formation. At the optimistic extreme,
the system would yield new theorems in mathematics and new ways of approaching 
existing ones.

The ideal would be for the system to find a useful
redivision of some concepts, and new concepts overlooked by mathematicians.
The next best result would be the re-discovery and re-development of
existing mathematics, but only by being carefully led along the "right"
path. Even here, one should demand that it not be given so much to
start with that the
development is boringly direct and/or the
end results are obviously predictable. ⊗7(They are of course theoretically
predetermined, in the Turing machine sense).⊗*

Even if the system never gets beyond the  most elementary levels in each
field, that very failure will indicate for the first time a lower bound
on the magnitude of the theory formation problem. If our best efforts
produce only meager results, we will have to rethink the set of 
strategies over and over again. This might actually result in a better
final set of strategies than if the original set (chosen by introspection)
performs well!  

How much the strategies must adapt as the system proceeds is not known,
and will be learned during the experiment. It is hoped that such notions as
"how to use the strategies", interestingness, etc., need not evolve as well.
They will be tuneable only by the system's creators. 

If all the necessary initial facts, intuitions, and strategies can be generated from
a tiny hand-selected core of same, 
that alone is worth study. No one has investigated either
of the two ideas upon which this depends: 
that such a basis exists, and that no special
techniques are necessary to expand that core. Any difficulty here will indicate how
a self-contemplative process must differ from a purely investigative process.
A related experiment is to selectively remove parts of the "core", and see:
(i) if they are discovered anyway by the remainder, (ii) if so: when, why, how,
by whom, (iii) if not: which of the results that they led to are rediscovered anyway,
and which seem lost forever to the system?
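The core-removal experiment just described is, in effect, an ablation study;
schematically (run_system here stands in for the entire, hypothetical
experimental apparatus):

```python
def ablation_study(core, run_system):
    """For each item removed from the core, record whether the
    remainder rediscovers it.  run_system(remainder) is assumed to
    return the collection of concepts the system ultimately develops."""
    results = {}
    for item in core:
        remainder = [c for c in core if c != item]
        results[item] = item in run_system(remainder)
    return results
```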

.NSEC(INTERNAL ACTIVITY)

This tentative description is meant to get things off the ground.
As more demands are imposed, it may well crumble, hopefully to reveal a better
internal organization.

A ⊗4BEING⊗* is simply a collection of parts.
Each part consists of a name and an internal value. 
Each BEING must belong to one of a few existing
⊗4families⊗*; each family member has exactly the same set of parts (the
names are the same; the values vary with the specific BEING).
The value of all parts with the same name must be stored in a known format (which
can vary with the part name); all these formats are themselves described in a single, uniform format.
Just as all BEINGs group into 5 families, so all parts fall into one of 4
major ⊗4part groupings⊗*.
All family members have the same set of parts names; all parts of a grouping
have some interrelated semantics. If the set of parts were ideally
orthogonal, one wouldn't have any meaningful parts groupings. There is an
⊗4advantage⊗* to the grouping, however: that of factoring. One needn't choose
among an array of 16 parts; rather, one chooses one of four groupings,
followed by one of four specific parts within that grouping.

.BEGIN W(1) NARROW 7,0

↓_BEING FAMILIES_↓
	Objects:	Primitive Containers, Structures, Assertions
		⊗7examples: variable, list, axiom⊗*
	Actives:	Operations, Relations, Properties 
		⊗7examples: insert, containment, ordered⊗*
	Static-Metas:	Unjustifiable, Partially justified, Justified, Mathematical
		⊗7examples: assumption, analogy, theorem, formal system⊗*
	Active-Metas:	Inference, Test, Communicate
		⊗7examples: analogize, debug, User-model⊗*
	Archetypical:	Recognition, Alter-self, Act-on-another, Info
		⊗7examples: changes-in-world,  boundary,  algorithms,  definition⊗*


↓_PART GROUPINGS_↓
	Recognition:	Changes, Final, Past, Iden
	Alter-self:	Generalizations, Specializations, Boundary, Domain/range, 
				Ordering, Worth, Interest, Justification, Operations
	Act-on-another:	Change another (Boundary-ops, Fillin, Restructure, Algorithms),
				Interpret another  (Check, Representation, Views)
	Info:		Definition, Intuition, Ties, Examples, Contents


.END

To each β part corresponds an archetypical β,
giving information about any part having
that name (how to fill it in, when to extend it, etc.). One part of each 
archetypical BEING (say the one named S) is called Representation,
and describes the format in which each
BEING (say B) must keep the information stored inside its S-part (which will be called B.S).

A rather specialized ⊗4environment⊗* exists to support these BEINGs. Encoded as
efficient opaque functions, the environment must oversee the flow of control
in the system (although the BEINGs themselves make each specific decision as to
who goes next). It must also include evaluations of belief, interest, superficiality,
safety, utility; it must keep brief statistics on when and how each part of each
BEING is accessed; and the environment must maintain a rigidly-formatted
description of the Current Situation (abbreviated CS; this 
structure also includes summaries of recent system history).
When a part is big and heavily accessed, detailed records
must be kept of each usage (how, why, when, final result) of each ⊗4subpart⊗*.
Based on this, the part may be split into a group of new BEINGs, and the value
of the part replaced by a pointer to a list of these new BEINGs. 

The environment would have to accept the returning messages of the attempt to
deal with a certain part of a certain BEING. A success or a failure would mean
backing up to the last decision and re-making it
(usually the top-level "select (P,B) to work on next" decision).
An "interrupt" from a trial
would mean "here is some possibly more interesting info". The environment
must decide if it is; if not, it returns control to the interrupted process.
If so, it automatically switches to that part of that BEING (the part may
not be specified). Later, there will be no automatic return to the interrupted
process, but whatever sequence of decisions led to its initiation may very
probably lead there again.
Two tricks are planned here. One is a cache: each BEING will let  its
RECOG parts store the last value computed, and let each
such part have a quick predicate which can
tell if any feature of the world has changed which might affect this value.
If not, then no work is done; the old value is simply
returned. Thus if x is interrupted and an auxiliary development is begun,
then when work on x later resumes,
most of the decisions leading back to x will probably not involve any real
work, since most of the world hasn't changed. The second trick is that to
evaluate a part, one moves down its code with a cursor, evalling. When
interrupted, that cursor is left just at the point one wants to start at when
the work resumes.
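In modern terms, the two tricks might be sketched as follows. This is a Python caricature; every name in it is invented for illustration and is not part of the proposed system.

```python
# A caricature of the two resumption tricks; all names illustrative.

class RecogCache:
    """First trick: a RECOG part stores the last value computed, plus a
    quick predicate telling whether any relevant feature of the world
    has changed.  If nothing changed, no work is done."""
    def __init__(self, compute, changed):
        self.compute = compute      # the (expensive) recognition computation
        self.changed = changed      # quick world-change predicate
        self.value = None
        self.valid = False

    def get(self, world):
        if self.valid and not self.changed(world):
            return self.value       # old value simply returned
        self.value = self.compute(world)
        self.valid = True
        return self.value

def evaluate_part(steps):
    """Second trick: move down a part's code with a cursor, evalling.
    A generator's suspension point serves as the cursor, so an
    interrupted evaluation resumes exactly where it left off."""
    for step in steps:
        yield step()

# Toy use: the second recognition query does no real work at all.
calls = []
cache = RecogCache(compute=lambda w: calls.append(w) or w * 2,
                   changed=lambda w: False)
```

The generator-as-cursor is only one possible realization; the proposal requires merely that the point of interruption be remembered.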

New BEINGs are created automatically if, when a part is evaluated and a new entity
formed, it has sufficient interest to make it worth keeping by name.
Also, an existing part P of a BEING B may be replaced by a list of new BEINGs.
The details of when and how to do this restructuring of B.P are stored under the 
Structure part of the archetypical β whose name is P. The typical process is as follows:
The environment keeps loose checks on the size and usage of each part; if one
ever grows and eats up much time, it is carefully monitored. Eventually, its
subparts may partition into a set whose usage is nearly disjoint. If this
is seen, then the part is actually split into a new set of BEINGs.
If a new BEING doesn't live up to its expectations, it may be executed
summarily (overwritten and forgotten; perhaps enough is remembered to not waste
time later on the same concept).


.SELECT 6
One difference from PUP6 is that here the BEINGs are grouped into families.
Each family has its own set of parts (although there will be many parts present in
many families, e.g. Iden). For each family F there will be a fairly general β named
F. Under each part of F
is general information which, though not applicable to all β's, is applicable to all
β's belonging to family F.  Similarly, if P is a part name, then the β named P contains
information which is useful for dealing with part P of any β. There might also exist
an archetypical BEING named F.P, who would have special information for working with
part P of any BEING in family F.  There might even be a BEING called B.P, where B is
some specific BEING, with information that just deals with part P of B and any future
specializations of B. The information stored inside a part of a BEING, for example
the actual contents of B.P, would be 
code capable of computing
B's answer to the question P; the previously
mentioned archetypical BEING named B.P would contain strategies for dealing with such
answering code (how to fill it in, how to check it, etc.).  
⊗7To reiterate:   the contents of a part
are specific knowledge, a little program which can answer a specific query, whereas
the contents of the parts of an archetypical β are partially ordered sets
of strategies
for dealing with that part of that type of BEING (how to
extend it, what its structure is, and so on). ⊗*
Notice we are saying that all the parts with
the same part name, of BEINGs in the same family, must all have the same structure.
⊗7This is one additional level of structure from the BEINGs proposed in PUP6.⊗*

.SELECT 1

When part p of BEING B is filled out, at some point in the sequence S of strategies
listed under the archetypical BEING named B.p or p, some new 
information may be discovered. If S cannot handle this knowledge, then  it will
simply return with the message "I am not through, but here is some fact(s) which
may mean that filling out part p of B is no longer the best activity".
The environment is aware that BEINGs and
parts are both organized into clumps or groupings. When such
an interruption
is reported, the environment will generally pass it on to the clump which
made the last relevancy decision (it first decides if the info. is interesting;
if not, control resumes immediately where it left off). 
A clump may be a part-grouping BEING, a BEING family BEING, a subfamily BEING,...
If the clump regains
control, its first duty is to quickly determine
whether or not it is still the best clump to be in control. If not, it relinquishes
control to the environment,
which asks the clump which called the first one, etc. If it is still the relevant
clump, the environment asks "who wants to continue on from this point".
The selected part and BEING may turn out the same, or may change due to the new
info which was just uncovered.
The flavor of the return should thus be one of: Not Done because x is
possibly more interesting; Not Done because x is a prerequisite to doing me;
Done because I succeeded; Done because I failed utterly.

The lower-level BEINGs will provide fast access to well-organized information.
The background environment provides the necessary evaluation services at
high speeds (though the system cannot meaningfully examine, modify, or add to
the environment functions which the creators provide).
The BEINGs hold "what to think"; the environment implicitly controls "how to think".
The big assumption is that one may think creatively without knowing how his thought
processes operate; intelligence does not demand absolute introspection.

Each clump is (at least partially) ordered, hence can be executed sequentially.
The result may be to choose a lower-level clump, and/or modify some strategies
at some level (some part of some BEING), and/or create new strategies at some
level (perhaps even to create a new BEING). These latter creations and calls
will be in the form of strong suggestions to the environment.

Common knowledge should in some cases be factored out. Possibilities:
(i) always ask a specific BEING, who sometimes queries a more general
one if some knowledge is missing; (ii) always query the most general
BEING relevant, who then asks some specific ones (This sounds bad);
(iii) ask all the BEINGS pseudo-simultaneously, and examine the
responders (this sounds too costly.) The organization of BEINGs into
hierarchical groupings reflects the spirit of (ii). A BEING only
contains additions and exceptions to what its generalization contains,
so (i) is actually the dominant scheme now envisioned.
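Scheme (i) admits a compact sketch (Python; the BEING names and part contents below are invented for illustration, not supplied by the proposal):

```python
# Scheme (i) in miniature: a BEING contains only its additions and
# exceptions; a query for a missing part falls back to the more
# general BEING along its generalization (Ties.Up) chain.

class Being:
    def __init__(self, name, parts=None, up=None):
        self.name = name
        self.parts = parts or {}    # only what differs from the generalization
        self.up = up                # the more general BEING, or None

    def ask(self, part):
        being = self
        while being is not None:
            if part in being.parts:
                return being.parts[part]
            being = being.up        # sometimes query a more general BEING
        return None                 # knowledge genuinely missing

relation = Being("Relation", {"DEFINITION": "a set of ordered pairs"})
ordering = Being("Ordering", {"WORTH": "high"}, up=relation)
```

Here Ordering answers WORTH from its own parts but inherits DEFINITION from Relation, which is the factoring the paragraph above describes.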

.SKIP TO COLUMN 1
.SSEC(The ENVIRONMENT)

.QP2←PAGE

COMPLETE(P,B) means fill in material in part P of BEING B. 

@21. Locate P and B.⊗*
If P is unknown but B is known, ask B.ORDERING
and up↑*(B).ORDERING. Also,
there may be special information
stored in some part(s) of B earlier, by other BEINGs, which makes those parts more or less
promising to work on now.
[ up↑*(B).P means the set of BEINGs named P, B.P, 
(B.Ties.Up).P,
((B.Ties.Up).Ties.Up).P,
etc. ]

If B is unknown but P is known, ask P and ask each β about interest of filling in P.
Each β runs a quick test to see if it is worth doing a detailed examination.
Sometimes the family of B will be known (or at least constrained).

If neither is known, each β must see how rele. it is; the winner decides on P.
If there is more than one β tied for top recognition, then place the results
in order using the environment function ORD, which examines the Worth components
of each, and by using the value of the most promising part to work on next for each
BEING. The frequent access to the (best part, value) pair for each BEING means that
its calculation should be quick; in general, each β will recompute it only when new
info. is added to some part, or at rare intervals otherwise.
After ranking this list, chop it off at the first big break in values, and print it out
to the user to inspect. Pause WAIT seconds, then commence work on the
first in the list. 
WAIT is a parameter set by the user initially. ⊗7(0 would mean go on unless user
interrupts you, infinity would mean always wait for user's reply, etc.)⊗*
When you finish, don't throw the list away until after the
next B is chosen, since the list might immediately need to be recomputed! 
If the user
doesn't like the choice you've made, he can interrupt and switch you over.
A similar process occurs if P is unknown, (except the list is never saved).

@22. Collect pointers to helpful information: ⊗*
 Create a (partially ordered) plan for dealing with part P of BEING B (abbreviated B.P).
 This includes the P.FILLIN part, and in fact any existing up↑*(B).P.FILLIN, and
 also some use of the representation, defn, views, dom/range parts of the P BEING.
 Consult ALGORITHMS and FILLIN parts of B and all upward-tied β's to B.

@23. Decide what must be done now⊗*; 
 which of the above pieces of information is "best". Tag it as having been tried.
 If it is precisely = one currently active goal, then forget it and go to 3.

@24. Carry out the step.⊗* (Evaluate the interest of any new BEING when it is created)
 Notice that the step might in turn call for accessing and (rarely) filling
 in parts of other BEINGs. This activity will be standard hierarchical calling.
 As parts of other BEINGs are modified, update their (best part, value) estimate.

@25. When done, update.⊗*
 Update statistics in B, P, and current situation. (worth and recog parts)
 If we are through dealing with B.P (because a higher-interest entity exists,
 or because the part is filled in enough for now) goto 1; else goto 3.
 If you stop because of a higher-interest entity, save the plan for B.P inside B.P.
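The cycle of steps 2 through 5 might be rendered as follows (a Python sketch; steps 1 and 5 are elided, and every name is invented for illustration):

```python
# Minimal sketch of the COMPLETE(P,B) control cycle described above.

def complete(part, being, plan_sources, carry_out, filled_enough):
    # Step 2: collect pointers to helpful information into a plan.
    plan = [step for source in plan_sources for step in source(part, being)]
    tried = set()
    while not filled_enough(part, being):
        # Step 3: decide what must be done now; tag it as having been tried.
        candidates = [s for s in plan if s not in tried]
        if not candidates:
            return "Done because I failed utterly"
        step = candidates[0]          # the "best" remaining piece
        tried.add(step)
        carry_out(step, part, being)  # Step 4: carry out the step.
    return "Done because I succeeded"

# Toy use: fill the EXAMPLES part of a BEING until it holds two entries.
being = {"EXAMPLES": []}
source = lambda p, b: ["add-1", "add-2", "add-3"]
result = complete("EXAMPLES", being, [source],
                  lambda step, p, b: b[p].append(step),
                  lambda p, b: len(b[p]) >= 2)
```

The real selection of the "best" piece, the interest evaluations, and the interruption machinery are of course the hard part; here they collapse into taking the first untried candidate.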

.BEGIN W(1) NARROW 5,0

ACCESS(K,P,B) means access pieces of knowledge K from part P of BEING B.

1. Locate each argument
	Typically given K. Find P' by asking archetypes, B' by asking all BEINGs.
	By iterating through this loop, the sets P' and B' will become singletons.
	As they become smaller, more individualized effort can be spent on distinguishing the choice.
2. Interpret the material in part P of BEING B.
	Use the representation part of P. 
3. Match K to this pattern, and try to extract it directly. 
	Often this will entail evalling or applying B.P.
	Evaluation is viewed as just one technique for processing a clump of knowledge, B.P,
		and extracting the precise bit K which is desired.
4. If the accession fails, consider P.VIEWS, consider setting up a message, consider
	giving up. Let the interest of the current goal (activation energy) be your guide.


CURRENT SITUATION is a vector of weights and features of the recent behavior of the system.
.FILL

The Environment also maintains a list of records
and statistics of the recent past activities, in a structure called CS, 
for "Current Situation".
Each Recognition grouping part is prefaced by a vector of numbers which are
dot-multiplied into CS, to produce a rapid rough guess of relevance.
Only the best performers are examined more closely for relevance.
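The rapid rough guess might look like this in Python; the features, weights, and BEING names are invented for illustration:

```python
# Each Recognition grouping part is prefaced by a weight vector which is
# dot-multiplied into the Current Situation vector CS; only the best
# rough scorers get a closer look.

def dot(weights, cs):
    return sum(w * c for w, c in zip(weights, cs))

def shortlist(beings, cs, keep=2):
    """beings maps a name to its Recognition weight vector; return the
    `keep` best rough scorers, to be examined more closely."""
    ranked = sorted(beings, key=lambda name: dot(beings[name], cs),
                    reverse=True)
    return ranked[:keep]

cs = [1.0, 0.0, 0.5]        # current-situation feature vector
beings = {"Compose": [2, 0, 1], "Insert": [0, 1, 0], "Union": [1, 0, 0]}
```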
The representation of each CS component is (identification info, motivation,
safety, interest, work done so far on it, final result or outlook). The
actual components might be:
.NOFILL
Recent Accesses.   For each, save (B, P, contents of subpart used).
Recent Fillins.    Save (B, P, old contents which were altered).
Current Hierarchical History Stack.  Save  (B, P, why).
Recent Top-level B,P pairs.
A couple significant recent but not current hierarchical (B,P,why) records.
A backward-sorted list of the most interesting but currently-deferred (B,P) fillins.
A few recent or colossal fiascos (B, P, what, why this was a huge waste).


ORD(B,C)  Which of the recognition-tied BEINGs B,C is potentially more worthwhile?

.FILL

This simple ordering function will probably examine the Worth vectors,  perhaps
involving the sum of weighted factors, perhaps even cross-terms such as
(probability of success)*(interest rating).
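One possible ORD, with the weighted sum and the one cross-term mentioned above; the Worth component names and the weights are invented for illustration:

```python
# A candidate ORD: weighted sum over Worth components, plus the
# cross-term (probability of success) * (interest rating).

def worth_score(worth):
    base = (2 * worth["aesthetic"] + worth["efficiency"]
            + 3 * worth["interest"])
    return base + worth["p_success"] * worth["interest"]  # cross-term

def ord_beings(b, c):
    """Return whichever recognition-tied BEING is potentially more worthwhile."""
    return b if worth_score(b["worth"]) >= worth_score(c["worth"]) else c

b = {"name": "Compose",
     "worth": {"aesthetic": 1, "efficiency": 0, "interest": 2, "p_success": 0.5}}
c = {"name": "Insert",
     "worth": {"aesthetic": 0, "efficiency": 3, "interest": 1, "p_success": 0.9}}
```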

.SELECT 6; NOFILL; NARROW 3,0


PLAUSIBILITY(z)       How believable is z?    INTEREST(z)    How interesting is z?

         each statement has a probability weight attached to it, the degree of belief
         this number is a fn. of a list of justifications
	 Polya's plausibility axioms and rules of inference
         if there are several alternate justifs., it is more plausible
         if some consequences are verified, it is more plaus.
         if an analogous prop. is verified, it is more plaus.
         if the consequences of analogue are verif., it is slightly more plaus.
         the converses of the above also hold
         believe in those things with high enough prob. of belief (rely on them)
         this level should fluctuate just above the point of belief in contradictions
         the higher the prob., the higher the reliability
         the amt. one bets should be prop. to the reliability
         the interest increases as the odds do
         Zadeh: p(∧) is min, p(⊗6∨⊗*) is max, p(¬) is 1-.
         Hintikka's formulae (λ, αα)
         Carnap's formulas (λ)
         p=1 iff both the start and the methods are certain ←← truth
         p=0 iff both start is false and method is false-preserving ←← falsity
	 p is higher as the plausibility is higher, and as the interest is lower
         if ∃ several alternative plaus. justifs., p is higher
         don't update p value unless you have to
         update p values of contradictory props.
         update p values of new props
         maybe update p value if it is a reason for a new prop
      empiricism, experiment, random sampling, statistics
         true ideas will be "verified" in (consistent with) any and all experiments
         false ideas may only have a single exceptional case
	 a single exception makes a universal idea false
         nature is fair, uniform, nice, regular; coincidences have meaning
         more plaus. the more cases verified
         more plaus. the more diff. types of cases verified
         central tendency (mean, mode, median)
         standard deviation, normal distribution
         other distributions (binomial, Poisson, flat, bimodal)
         statistical formulae for significance of hypothesis
      regularity, order, form, arrangement
         economy of description means regularity exists
         aesthetic desc (ana. to known descs. elsewhere)
         each part of desc. is organized regularly
         the parts are related regularly

  Below, αα means ⊗4increases with increasing⊗* (proportionality), and
  αα↑-↑1 means ⊗4decreases with increasing⊗* (inversely proportional).
  Perhaps one should distribute these morsels among the various concerned β's:
   Completeness of an analogy  αα  safety of using it for prediction
   Completeness of an analogy  αα↑-↑1 how interesting it is
   How expected a relationship is  αα↑-↑1  how interesting it is
   How intuitive a conjec/relationship is  αα↑-↑1  how interesting it is
   How intuitive a conjec/relationship is  αα  how certain/safe it is
   How superficial something is  αα  how intuitive it is
   How superficial something is  αα  how certain it is
   How superficial something is  αα↑-↑1 how interesting it is

  Perhaps included here should be universally applicable algorithms for applying these rules
  to choosing the best strategies, as a function of the situation.

   One crude estimate of interest level is the interest component of the current β's
   Modify this estimate in close cases using the above relations
   Generally, choose the most specific strategies possible
   If the estimated value of applying one of these falls too low, try a more general one
   Rework the current β slightly, if that enables a much more specific strategy to be used
   Locate specific concepts which partially instantiate general strategies
   The more specific new strategies are associated with the specific info. used
   Once chosen, use the strategies on the most promising specific information
   If a strat. falters: Collect the names of the specific, needed but blank (sub)parts
      Each such absence lowers int. and raises cost, and may cause switch to new strategy
      If too costly, low int, store pointer to partial results in blank parts 
         The partial results maintain set of still-blank needed parts

   Competing goals: On the one hand, desire to maximize certainty,
      safety, complete analogies, advance the level of intuition.
      On the other hand, desire to maximize interestingness, find poss. and poten. interesting ana.
       find unexpected, nonsuperficial, and unintuitive relationships.
   If an entity is used frequently, it should be made efficient.
      Conversely, try to use efficient entities over nearly
      equivalent (w.r.t. given purpose) but inefficient ones.
   If an entity is formally justified but resists intuitive comprehension, its use is
      dangerous but probably very interesting; ibid for intuitive but unprovable.
   Resolve choices in favor of aesthetic superiority

   Maximize net behavior
    Maximize desired effects
      In this case, prefer hi interest over hi safety.
      Generally preferred to the following case.
    Minimize costs, conserve resources
      In this case, prefer safety to interest.
      Locate the most inefficient, highest-usage entity, and improve or replace it
      Use: If time/space become a problem, worry about conservation until this relaxes.
.END
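Of the plausibility morsels listed above, Zadeh's combination rules are concrete enough to state directly: p(∧) is min, p(∨) is max, p(¬) is 1-p. As a sketch:

```python
# Zadeh's combination rules for degrees of belief, exactly as cited
# in the plausibility list above.

def p_and(*ps):
    return min(ps)          # conjunction: the weakest conjunct governs

def p_or(*ps):
    return max(ps)          # disjunction: the strongest disjunct governs

def p_not(p):
    return 1 - p            # negation: complement of the belief
```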

.NSEC(INITIAL KNOWLEDGE and REPRESENTATION)

This section proposes a corpus of information, some of  which will be carefully
constructed, and all of which should
be present in the system before the user approaches it.
This presentation will be repeated at several levels of detail, so that
the reader will obtain a global view before going into detail.
The deeper the level, the more definite  the assumptions which are needed in
order to fill out the knowledge. Even at the descriptive level in this
document, some representation decisions had to be tentatively assumed.

The theme of a BEINGs system is to distribute the understanding of knowledge among
all the parts of all the modules. Thus there will be many different ways in which
the system can claim to understand something. For example, it might be able to carry
out some activity (Algorithms), to formally discuss that activity (Definition), to
relate it to other activities it knows about (Ties), and even to give vivid intuitive
imagery to aid in visualizing the essence of the activity (Intuition).
Some of the knowledge present
initially will be stored in each of these forms.
The actual ways to represent the knowledge, especially
intuitive knowledge, are of some interest.  As before, the presentation
will be repeated at a few different levels of detail.
Since the representation must be known before the knowledge can be understood
in that format, details about our representations precede details about the
core of facts and strategies initially supplied to the system.

.SSEC(Representation: Level 1)

The two broad categories of knowledge are definite and intuitive. To represent
the former, we employ (i) rules and assertions, (ii) BEINGs grouped into families,
and (iii) opaque Environment functions. To represent the latter, we employ
(i) abstract rules, (ii) pictures and examples, and (iii) opaque Environment functions.


.SSEC(Initial Knowledge: Level 1)

The following is a sketch of how the top level of knowledge in the system
is organized. Each node in the right lower section is both a BEING and the  
prototypical representative of a family of BEINGs.
The Environment node stands for a collection of opaque background system functions.

.B7

 				  ⊂ααααααααααα⊃
				  ~ Knowledge ~
				  %αααααπααααα$
   				        ~
           ⊂αααααααααααααπααααααααααααααβαααααααααααααααααπαααααααααααααπαααααααα⊃
           ↓             ↓              ↓                 ↓             ↓	 ↓
   Environment      Active-Meta      Static-Meta        Active       Static    Parts

.QP←PAGE
.E

.SSEC(Representation: Level 2)

Each currently popular formalism for representing knowledge
represents a point somewhere along (or very near to) the ideal
"intelligence vs. simplicity/speed" tradeoff curve.  Another way to
see this is to picture each representation as some tradeoff between
structure and uniformity, between declarative and procedural
formulations.  Each idea has its uses, and it would be unwise to
demand that any single representation be used everywhere in a given
system.  One problem with the alternative is how to interface the
multiple representations.  Usually this is enough to persuade
system-builders to make do with a single formalism.  The proposed
system will be pushing the limits of the available machinery in both
the time and space dimensions, and therefore cannot indulge in such
luxuries!  Knowledge used for different types of tasks must be
represented by the most suitable formalism for each task. 

BEINGs are highly structured, intelligent, but slow. Rules and
assertions are more uniform and swift, but frequently awkward. 
Compiled functions win on speed but lose on accessibility of the
knowledge they contain.  Pictures and examples are universal but
inefficient in communicating a small, specific chunk of information. 

Let us now partition the types of tasks in our system among these
various representations.  The frequent, opaque system tasks (like
evaluating interestingness, aesthetic appeal, deciding who should be
in control next) can be programmed as compiled functions (the only
loss -- that of accessibility of the information inside -- is not
relevant since they should be opaque anyway). 

The specific math knowledge has many sophisticated duties, including
recognizing its own relevance, knowing how to apply itself, how to
modify itself, how to relate itself to other chunks of knowledge,
etc. It seems appropriate that BEINGs be used to hold and organize
this information. The main cost, that of slowness, is not critical
here, since each individual chunk is used infrequently, and a wrong
usage is far more serious than a slow usage.  One final factor in
favor of using BEINGs here is that all the knowledge that is
available at the time of creation of a new β will find its way to the
right place; any missing knowledge will be conspicuous as a blank or
incomplete β part. 

The contents of each part of each β are composed of specialized rules,
assertions, and pointers to other parts and other BEINGs. The
knowledge may have to be altered from time to time, hence must be
inspectable and interpretable meaningfully and easily, so compiled
code is ruled out. To demand that each part of each β be itself a β
would trivially cause an infinite regress. Hence the reliance upon
"intermediate" representations. 

Communication between very different entities, for example between
the User and a β not designed to talk with him, is best effected via
a picture language and/or an examples language (from which the
receiver must infer the message). Such universal media are proposed
for faltering communications, for holding and relating intuitions of
the essences of the knowledge chunks stored in the BEINGs. 

The representation of intuitive knowledge as pictures and examples is
certainly not original.  Set theory books usually have pictures of
blobs, or dots with a closed curve around them, representing sets.
For our purposes, a set will be represented in many ways.  These
include pointer structures for ⊗6ε⊗*, ⊂, and their inverses; analytic
geometric functions dealing with sets as equations representing
regions in the plane; prototypical examples of sets; a collection of
abstract rules for simulating the construction and manipulation of
sets; and, finally, a set might be intuitively represented as a
square in the cartesian plane.  All these are in addition to the
definite knowledge about sets (definition, axioms and theorems about
sets, specific analogies to other concepts). 

The notion of a fuzzy rule will remain fuzzy throughout this
document. The basic idea is that of a production system, with left
sides that can latch onto almost anything, which eventually generate
lots of low-certainty results. These would augment some
β's intuition parts, and when trying to relate two given BEINGs which
both had such fuzzy abstract rules, one might try to "run" the
combined production system, or merely to "compare" the two systems. 
As with pictures and examples, the benefits of universality and
adaptability outweigh the inefficiencies. 
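A toy rendering of a fuzzy-rule production system may convey the idea; the single rule and the facts below are invented for illustration, not proposed content:

```python
# Fuzzy production system in miniature: left sides latch onto almost
# anything, and firing generates lots of low-certainty results.

def run_fuzzy(rules, facts, rounds=2):
    """rules: list of (match, conclude, certainty) triples.
    Returns a dict from derived facts to their certainties."""
    known = {fact: 1.0 for fact in facts}
    for _ in range(rounds):
        for match, conclude, certainty in rules:
            for fact, cert in list(known.items()):
                if match(fact):
                    new = conclude(fact)
                    # keep the strongest justification seen so far
                    known[new] = max(known.get(new, 0.0), cert * certainty)
    return known

rules = [(lambda f: "set" in f, lambda f: f + "-has-members", 0.4)]
out = run_fuzzy(rules, ["empty-set"])
```

"Running" the combined systems of two BEINGs, as suggested above, would amount to pooling their rule lists before calling such a loop; "comparing" them, to matching the rule lists themselves.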

Opaque simulations of (about a dozen) real-world situations are another important
component of the representation of intuitive knowledge. For example,
there might be a simulated jigsaw puzzle, with models of pieces,
goals, rules, hints, progress, extending, etc.  There will be a
simulated playground, with a seesaw model that will respond with what
happens whenever anything is done to either side of the seesaw. There
will be flamboyant models, like mountain-climbing; mundane ones like
playing with blocks; etc.  The obvious representation for these simulations
is compiled functions, which are automatically opaque. 
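The seesaw model, for instance, might be no more than this (a Python sketch; the proposal specifies only that the model respond with what happens, so the caricature physics here is an assumption):

```python
# One opaque intuition model in miniature: the playground seesaw.
# It simply reports what happens whenever anything is done to
# either side.

class Seesaw:
    def __init__(self):
        self.left = 0
        self.right = 0

    def add(self, side, weight):
        """Put some weight on one side and report the outcome."""
        if side == "left":
            self.left += weight
        else:
            self.right += weight
        return self.state()

    def state(self):
        if self.left > self.right:
            return "left side down"
        if self.right > self.left:
            return "right side down"
        return "balanced"
```

A BEING's Intuition part could consult such a model to ground notions like equality and ordering, without the model's internals ever being inspectable by the system.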

.SKIP 2

.SSEC(Initial Knowledge: Level 2)

.ONCE TURN ON "{"
Below are diagrams of the knowledge present under each of the six major categories
of knowledge, as pictured in Initial Knowledge, Level 1, on page {QP}.
The first sketch indicates the major structures and functions in the
environment which the BEINGs see. Notice that the intuitive simulations don't
appear here; they are distributed among the INTU parts of all the BEINGs.

.B7
			        ⊂ααααααααααααα⊃
				~ Environment ~
				%αααααααπααααα$
					~
		⊂ααααααααααααπααααααααααβαααααααααπααααααααααα⊃
		↓	     ↓          ↓	  ↓	      ↓
  	     Interest     Control    Belief    Choice     Current-Situation
.E

The next five trees show the individual 
BEINGs present in each of the five families of BEINGs.
Each node is a β; almost all the β's  envisioned are present in the sketches below.

.B7

				⊂ααααααααααααα⊃
				~ Active-Meta ~
				%αααααααπααααα$
					~
	  ⊂αααααααααααααααααααααααααααααβαααααααααααααααααααααααααααα⊃
	  ↓				↓			     ↓
	Infer			      Test		    	Communicate
  ⊂ααααπαα∀ααααπααααααααπαααααααα⊃      ~			/        \
  ↓    ↓       ↓        ↓        ↓      ~                      /          \
Find Guess Analogize Conserve Examine   ~	With other BEINGs      With the user
					~			      /           \
			     ⊂ααααααπααα∀αααπαααααααα⊃       Translation  User-Model
			     ↓      ↓       ↓        ↓	     /          \
			 Disprove Debug   Assume   Prove  Into-English  From-English
			  /    \                     ~
		Constructive   Indirect     ⊂ααααααααβααααααααπαααααααααπαααααααα⊃
					    ↓        ↓        ↓         ↓        ↓
				  Natural-Deduc. Backward Indirect Existential Univ.
					    ↓			    /      \
					  Cases		 Constructive      Indirect
.E

One point to notice is that testing and inferring activities (above) 
are separated from the
⊗4by-products⊗*   
of testing and inferring
(below), namely conjectures, proofs, and counterexamples.
The former are things to do, the latter are objects which are static.
One can use a theorem, e.g., without remembering or caring how it was proved.


.B7
 				 ⊂ααααααααααααα⊃
				 ~ Static-Meta ~
				 %ααααααπαααααα$
					~
	       ⊂ααααααααααααααααααααααααβααααααααααααααααααααααααπααααααααααααααααα⊃
	       ↓ 			↓			 ↓		   ↓
.ONCE TURN ON "α"
	Non-justifiable		   Quasi-justified    	  Fully-justified       Math
	/      |      \             /     |     \          /     |     \	   ~
Assumption Message Contradiction Analogy Bug Conjecture Proof Theorem Counterex.   ~
										   ~
				     Mathematical Theory, Basis, Formal System ←ααα$
.E

Although conjectures are far removed from belief (in the tree), the environmental
routines permeate throughout temporal and arboreal space. Belief and interest
are constantly being evaluated.

.B7


				   ⊂αααααααα⊃
				   ~ Active ~
				   %ααααπααα$
					~
	⊂αααααααααααααααααααααααααααααααβααααααααααααααααααααααααααααααα⊃
	↓				↓				↓
    Operation			     Property			    Relation
	~				~				~
	~			      Ordered			       / \
.ONCE TURN ON "α"
	~                                    ⊗7Equals Member Contain Equivalent Ordering Quantification⊗*
	~
       / \
.ONCE TURN OFF "@"; TURN ON "α"
⊗7Compose Insert Delete Convert Subst Rule ∨ ∧ Unite ∪ Common-parts ∩ Setdifference⊗*@
.APART
.GROUP



				   ⊂αααααααα⊃
				   ~ Static ~
			           %ααααπααα$
					~
	⊂αααααααααααααααααααααααααααααααβααααααααααααααααααααααααααααααα⊃
	↓				↓				↓
Primitive Containers		   Structures			   Assertions
	~				~				~
   ⊂αααα∀ααααπααααααα⊃	       ⊂ααααπαααβααααπααα⊃		      Axioms
   ↓         ↓       ↓         ↓    ↓   ↓    ↓   ↓
Ord.pair  Variable  T,F      Hist List Oset Bag Set
.APART
.GROUP



		      ⊂αααααααααααααααααααααααααααααααααααααα⊃
	   	      ~	Parts (Archetypical Strategy BEINGs) ~
		      %αααααααααααααααααπαααααααααααααααααααα$
					~
	     ⊂ααααααααααααααααααααπααααα∀ααααααααπααααααααααααααα⊃
	     ↓			  ↓		 ↓		 ↓
	Recognition		Alter		Act		Info
	     ~			   /		/ \		 ~
    ⊂αααααπαα∀ααπαααα⊃		  /	       /   \	    ⊂ααααβααααπαααπαααα⊃
    ↓     ↓     ↓    ↓		 / 	      /     \       ↓    ↓    ↓   ↓    ↓
Changes Final Past Iden		/       Interpret Change  Defn Intu Ties Exs Contnts
			       /             ~       ~
			      /	    ⊂αααπααααλ       εααααααπααααααπααααααα⊃
			     /	    ↓   ↓    ↓       ↓	    ↓      ↓       ↓
			    /	Check Repr Views  Bdy-ops Fillin Struc Algorithms
			   /
	⊂αααααααααπααααααα∀∀παααααααααπαααααααπαααααπαααααααπαααααααπααααααα⊃
	↓         ↓         ↓         ↓       ↓     ↓       ↓       ↓       ↓
Genlzations Speclzations Boundary Dom/Range Order Worth Interest Justif Operations
.E
.SKIP TO COLUMN 1
.SSEC(Representation: Level 3)


At the moment, this section may appear to be a bizarre collection of data
too specific to be placed anywhere else. 

The first item is:  Which parts might a BEING have?  Below, we list all the possible
parts, and give a brief description of what questions each can handle.

.BEGIN W(6) NARROW 2,0

⊗4RECOGNITION GROUPING⊗*
 CHANGES		Is this rele. to producing the desired change in the world?
 FINAL  		What situations is this β rele. to bringing about?
 PAST			Where is this used frequently, to advantage?
 IDEN {not}{quick}	{fast} tests to see if this β is {not} currently referred to


⊗4ALTER GROUPING⊗*
 GENERALIZATIONS	What is this a special case of? How to make this more general.
 SPECIALIZATIONS	Special cases of this? What new properties exist only there?
 BOUNDARY		What marks the limits of this concept? Why exactly there?
 DOMAIN/RANGE {not} Set of (what one can{'t} apply it to, what kind of thing one {never} gets)
 ORDERING(Complete)	What order should the parts be concentrated on (default)
 WORTH	Aesthetic, efficiency, complexity, ubiquity, certainty, analogic utility, survival basis
 INTEREST		What special factors make this type of BEING interesting?
 JUSTIFICATION   Why believe this? Formal/intu. For thms and conjecs. What has been tried?
 OPERATIONS  Properties associated with β. What can one do to it, what happens then?


⊗4ACT GROUPING⊗*
CHANGE subgrouping of parts
 BOUNDARY-OPERATIONS {not}  Ops rele. to patching {messing}up not-bdy-entities {bdy-entities}
 FILLIN  How to initially fill it in, when and how to augment what is there already.
 STRUCTURE 		Whether, when, how to restructure (or split) this part.
 ALGORITHMS		How to compute this function. Related to Repr.
INTERPRET subgrouping of parts
 CHECK   		How to examine and test out what is already there.
 REPRESENTATION  How should entities of type β be structured internally? Contents' format.
 VIEWS	(e.g., How to view any Active as an operator, function, relation, property, corres., set of tuples)
 


⊗4INFO GROUPING⊗*
 DEFINITION		Several alternative formal definitions of this concept. Can be axiomatic, recursive.
 INTU		Analogic interp., ties to simpler objects, to reality. Opaque.
 TIES   	Alterns. Parents/offspring. Analogies. Associated thms, conjecs, axioms, specific β's.
 EXAMPLES {not} {bdy}	Includes trivial, typical, and advanced cases of each type.
 CONTENTS       What is the value stored here, the actual contents of this entity.

⊗1**********************************************************************⊗*

.END

The next item of interest is which parts each BEING must have. 
In PUP6, each β had (theoretically) exactly
the same set of parts. Here, each ⊗4family⊗* will have the same set.
For each possible part, we list below those families having that part:

.BEGIN W(7); TABS 30,40,50,62,74; TURN ON "\"  GROUP
@2   ↓_Part Name_↓\Static\Active\Static Meta\Active Meta\Archetypical
.TABS 34,44,57,69,80
.INDENT 6

⊗6RECOGNITION GROUPING⊗*
 CHANGES\X\X\X\X\X
 FINAL\X\X\X\X\X
 PAST\X\X\X\X\X
 IDEN {not}{quick}\X\X\X\X\X

⊗6ALTER GROUPING⊗*
 GENERALIZATIONS\X\X\X\X
 SPECIALIZATIONS\X\X\X\X
 BOUNDARY\X\X\X\
 DOMAIN/RANGE {not}\\X\\X
 ORDERING(Complete)\\X\X\X
 WORTH\X\X\X\X
 INTEREST\X\X\X\X
 JUSTIFICATION\\\X\
 OPERATIONS\X\X\X\X

⊗6ACT GROUPING⊗*
		CHANGE subgrouping of parts
 BOUNDARY OPERATIONS {not}\X\X\X\
 FILLIN\\\\\X
 STRUCTURE\\\\\X
 CHECK\\\\\X
 ALGORITHMS\\X\\X\
		INTERPRET subgrouping of parts
 REPRESENTATION\X\X\X\X\
 VIEWS\\X\X\\

⊗6INFO GROUPING⊗*
 DEFINITION\X\X\X\X\X
 INTU\X\X\X\X\X
 TIES\X\X\X\X\X
 EXAMPLES {not} {bdy}\X\X\X\X\
 CONTENTS\X\X\X\X\

⊗1**********************************************************************⊗*

.END

We have dwelt on BEINGs so long that the reader is now entitled to hear about the
other representations. 
Since they are more conventional, there is less need to
delve into their details.
The rules are arranged in pools, with several independent
pointer systems to locate rules relevant in various ways. The functions
are compiled INTERLISP code, perhaps using CLISP and some of the QLISP features.
Even the terminology here suggests the importance of BEINGs over these formalisms:
the rules are mere parts of β's, and the functions are merely the
⊗4environment⊗*, the background for the BEING activities.

************************************************************************************

	↓_INTUITION FOR A SET:_↓
 
Let us now deal with the "square" representation for a set in  more detail.
A set S is characterized as a rectangle in the Cartesian plane; the opaque
intuition function knows about numerical equality and inequality, hence about
borders of such sets. The notions
of intersection, union, complement, setdifference, disjointness, projection
onto each axis, etc. are also intuitively available.  Notice that the
sophisticated operations required (e.g., projection) will exist as opaque
functions, totally inaccessible to the rest of the system. This is worth
rejustifying: it is fair to write a LISP program (which uses the function
TIMES) whose task is to synthesize code for the function TIMES, so long as
the program does not have access to, and does not even know about, its use of
TIMES. 

This "square" representation is not well suited to all concepts
involving sets.
For that reason,
the system will simultaneously maintain several of the other forms of
intuitive storage mentioned previously.  Consider, for example, the
possibility of fuzzy rules, which can latch onto almost anything
and produce some type of result (but with low certainty). That is, they
operate at a higher level of abstraction than definite rules, by ignoring
many details. Another possibility is the use of examples. If a small set of
them can be found which is truly representative of a concept, then future
references to that concept can be compared to these examples.  This may
sound very crude, but I believe that people rely heavily (and
successfully!) on it.

Euler, to overcome language problems when lecturing a German princess,
devised the use of circles to represent sets. Venn and others
have frequently adopted this image. For a machine, it seems more
apropos to use a rectangle, not a circle.  Consider the lattice of
integral points in two dimensions. Now a set is viewed as a rectangle
-- or a combination of a few rectangles -- in this space. This makes it
hard to get any intuition about continuity or boundary or openness, but
works fine for the discrete sets which are dealt with in logic, 
elementary set theory, arithmetic, number theory, and algebra. It is
probable that the system will therefore not be tried in the domains of
real analysis, geometry, topology, etc. with only this primitive notion
of space and confinement.  Specifically, a set in this world is an
ordered pair of pairs of natural numbers. Projection is thus trivial
in LISP (CAR or CADR), as is test for intersection, subset, etc.
Notice that these require use of numbers, ordering, sets, etc., so the
functions which accomplish them must be opaque.  The interaction
with the rest of the system will be for these pictures to suggest and
reinforce and veto various conjectures.  They serve to generate
empirical evidence for the rest of the system.
To avoid gerrymandering, it might be necessary to view a set as a list
(of arbitrary length) of ordered pairs; an absent pair can be assumed to be
some default pair. That is, a set is a simplex in Hilbert space; each set has
infinite dimension, but differs from any other in only finitely many of them.
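As an illustration only -- Python standing in for the proposed opaque INTERLISP
functions, with a rectangle stored as an ordered pair of coordinate-bound pairs, and
with all function names invented for this sketch -- the lattice representation and
its trivial operations might look like:

```python
# Illustration of the "square" representation: a set is an ordered pair
# of pairs of natural numbers ((x1, x2), (y1, y2)), a rectangle on the
# integer lattice.  All names here are invented for this sketch.

def rect(x1, x2, y1, y2):
    return ((x1, x2), (y1, y2))

def project_x(s):                 # the analogue of CAR
    return s[0]

def project_y(s):                 # the analogue of CADR
    return s[1]

def intersects(s, t):
    (ax1, ax2), (ay1, ay2) = s
    (bx1, bx2), (by1, by2) = t
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def subset(s, t):                 # is s contained in t?
    (ax1, ax2), (ay1, ay2) = s
    (bx1, bx2), (by1, by2) = t
    return bx1 <= ax1 and ax2 <= bx2 and by1 <= ay1 and ay2 <= by2
```

Projection really is just component selection, and intersection and subset reduce
to numerical comparisons on the bounds; the numerical knowledge stays inside the
opaque functions.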

How should the system choose which intuitive representation(s) of a set to use?
Some considerations are: 
	What operations are to be done to this set
(e.g., ⊗6ε⊗*, ⊂, ∩, ∪, ⊗6≡⊗*, =, ',...)? The representations differ in cost of
maintenance and in the ease with which each of these operations can be
carried out. 
	How artificial is the representation for the given set?
Some will be quite natural, e.g., if the set is a nest then use the
pointer structure; if the set is a relation over the small set AxB, then use the
lattice points representation.
	How much is "given away" by the model? This is a
question of fairness, and means that the system-writers must build in
opacity constraints and/or make the intuitive operations faulty.
We shall do both.
	How compatible is each representation with the computer's 
physiology?  Thus it is
almost impossible to represent pictures or blobs directly, but very
suitable to store algebraic equations defining such geometric images.
	Does the representation suggest a set theory with basic elements 
which are non-sets; with an infinite model; with any special desirable or
undesirable qualities? For example, the geometric representation
seems to demand the concept of continuity, which the system probably
won't ever use in any definite way.

************************************************************************************

There are about 125 β's in the proposed core, 
and each one of them should have an intuition almost
as rich as that for SETS, above. Space precludes delving into each one; a few
lines about each β's intuition are present in the document "⊗4GIVEN KNOWLEDGE⊗*".

.SKIP 2

	↓_REVIEW OF THE PARTS GROUPINGS_↓

In case the reader wants to see the breakdown of the parts again, they are
reviewed below, group by group.  The particular families are not mentioned,
since most of the parts occur in most of the five families anyway.
During system runtime, a part is filled in or extended
whenever a new idea becomes explicit.
The proximate driving force of the system is the urge to ⊗4complete⊗*
each BEING.  The true drivers are the judgmental criteria functions.

The four pictures below indicate the four main parts groupings, which in turn
reflect the four reasons for calling on a BEING or a part of one:
to see if it is relevant, to modify itself in some way, to deal with a
supplied argument (some part of some other BEING), or simply to answer a question
(accessible information). Under each category are several distinct parts and
in some cases further groupings of parts. Each grouping is itself a BEING; each
part is also represented by one archetypical BEING. In any given case, however,
the value stored in part of a BEING is simply some rules, pointers, numbers, etc.
The exact format of, e.g., part P of BEING B is specified in the REPRESENTATION 
part of the archetypical BEING
whose name is P.  In case some special information exists for dealing with B.P,
there may be another relevant archetypical BEING, whose name would actually be B.P.

.B7

			⊂ααααααα⊃
			~ RECOG ~
		        %αααπααα$
			    ~
		   ⊂αααααπαα∀ααπααααα⊃
		   ↓     ↓     ↓     ↓
    	      Changes  Final  Past  Iden

.E

The RECOG grouping is concerned with handling the following types of questions:
Are you relevant to effecting this change in the world..., Can you bring about this
state of the world..., How successful were you in situations similar to the
current one..., 
Can you recognize this phrase...
These four  types of questions are handled respectively
by the CHANGES, FINAL, PAST-USE, and IDEN parts.
.B7

			⊂ααααααα⊃
			~ ALTER ~
			~  self ~
			%αααπααα$
			    ~
			    ~
	⊂αααααααπαααααααααααβαααααααααααπααααααααπαααααααα⊃
	↓	↓	    ↓		↓	 ↓	  ↓
Generalize  Specialize  Boundary    Ordering   Worth     Ops
			    ↓                 /     \
.ONCE TURN ON "α"
			Dom/Range      Interest     Justification
.E

The ALTER grouping is concerned with handling the following types of questions:
What is the boundary of the current concept? Why does it exist; why can't you
relax some constraint and generalize yourself? Is there anything interesting
happening when you specialize yourself; how ⊗4can⊗* you specialize yourself?
How incomplete are you; what part should be attended to next? Are you worth
surviving; why, what good are you?  
What factors make a β like you interesting/uninteresting?
What can (can't) be done to you?
These types of questions are handled respectively
by the Boundary, Generalize, Specialize, Ordering, Worth, Interest, and 
Operations parts.
.B7

			⊂ααααααα⊃
			~ ACT w.~
			~ other ~
			εαααααααλ
			/       \
		       /         \
		      /	          \
		     /		   \
		    /		    \
	    Interpret		     Change
	    /   ~   \		       ~ 
Representation Views Check        ⊂αααα∀αααπαααααααααααααπαααααααααααααα⊃
				  ↓        ↓             ↓              ↓
			     Structure  Fillin  Boundary-operations Algorithms

.E

The ACT grouping is concerned with handling the following types of questions:
How can this entity be pulled across your boundary? (Boundary operators part).
Most of the rest of the questions deal with BEINGs which
represent a part: whether to check if this part might be too "full";
if so, ⊗4how⊗* to check it; if action is indicated, how interesting must
the subpart(s) be before actually doing something; to act, do we split
or merely restructure? (Structure part)
What is the format of a typical one of you? (Representation part).
How much of this has been filled in so far? 
How do I double-check this information?
How do I fill in
some more? (Check, Fillin).
In general, there are two kinds of requests here. One is for actually changing
a part whose name is the name of this BEING (use the Change subgrouping). The other
kind of job is simply one of interpreting some aspect of such a part
(the Interpret subgrouping of parts).
.B7

			⊂ααααααα⊃
			~  INFO ~
			%αααπααα$
			    ~
			    ~
         ⊂ααααααααααπαααααααβαααααααπααααααααα⊃
    	 ↓	    ↓       ↓       ↓         ↓
.ONCE TURN ON "α"
    Definition  Intuition  Ties  Examples  Contents
			    ~
     ⊂αααααααααπααααααααααααβαααααααααααααα⊃
     ↓         ↓            ↓              ↓
Analogues   Family    Alternatives    Related-objects(thms, conjecs, axioms)

.E

The INFO grouping is concerned with handling types of questions dealing with
ubiquitous facts about this BEING. These include categories which are
needed by more than one of the preceding three groupings, those needed in
several different ways, those which other BEINGs might want to inspect, etc.
The names of the parts in the picture are self-explanatory.

.SKIP 5

A scheme for organizing the pointer systems for RULES now follows.
Each rule will have several types of pointers, to indicate relevant
rules. One set might be as follows:

.BEGIN W(1) INDENT 7

ABSOLUTE  The rules pointed to here should definitely be examined.
SUCCESS   If this rule succeeds, then look at these anyway.
FAILURE   If this rule fails, by a little, then look at these. (More descriptive, perhaps).
EXTEND    If a more comprehensive result is desired
CONTRACT  If a more restricted, simpler result is desired.
WORTH     What is this rule's expense of execution? Its chance of success?
          Point to cheaper rules/functions; point to costlier rules/BEINGS.
INTU      Point to abstract intuitive rules relevant to this rule.
DEF       Point to less abstract rules which are related to this one.

.E

Notice that the rule parts are simpler, fewer, and more uniform than the set
of BEING parts. A simple pool of unstructured rules might be all that is needed
(situation-action productions).  That is, each rule is executable, and has some of
the above 8 supplementary pointers filled in. The drive to fill in the pointers of
Rules is much lower than the drive to fill in parts of BEINGs.
Conceivably, the system might not even have such pointers attached unless the need
specifically arises. The structure of a part of a Rule is considered opaque, to prevent
any regress here, and to permit the rules to be coded for speed and compiled.
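A minimal sketch of such a rule -- in Python rather than INTERLISP, with the eight
supplementary pointer slots as simple lists of rule names; every name here is
invented:

```python
# A rule is an executable body plus the eight supplementary pointer
# slots listed above.  Slot values are just names of other rules.
# Unfilled slots simply stay empty until the need specifically arises.

POINTER_SLOTS = ("ABSOLUTE", "SUCCESS", "FAILURE", "EXTEND",
                 "CONTRACT", "WORTH", "INTU", "DEF")

class Rule:
    def __init__(self, name, body, **pointers):
        self.name = name
        self.body = body          # situation -> result, or None on failure
        self.pointers = {s: list(pointers.get(s, ())) for s in POINTER_SLOTS}

    def fire(self, situation):
        result = self.body(situation)
        # the ABSOLUTE rules are examined in any case; SUCCESS or
        # FAILURE pointers are added depending on how the body fared
        slot = "SUCCESS" if result is not None else "FAILURE"
        return result, self.pointers["ABSOLUTE"] + self.pointers[slot]
```

For instance, Rule("r1", lambda s: s.get("x"), ABSOLUTE=["r0"], SUCCESS=["r2"])
fires on the situation {"x": 1} and proposes examining r0 and r2 next.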

A major fraction of the environment will consist of absolutely opaque
functions, coded for maximum efficiency, 
which perform "primitive" functions absent in
INTERLISP but desirable for our system.
The precise representation of the efficient functions is not important,
since they are completely opaque to the rest of the system. Access to a
compiler should probably be permitted; once the system has an algorithm
to do something, there is no reason why it shouldn't be allowed to point
to a compiled routine for the same algorithm.
⊗7Indeed, most humans who use a compiler don't really understand or care about how it
works.  Even those who ↓_do_↓ understand it will typically just 
extract a few general
do's and don'ts and tricks, 
and not keep recalling pieces of the compiler's code.⊗*

.SKIP 3

.SSEC(Initial Knowledge: Level 3)

For each BEING, we now present a brief summary of the value stored in each of its
parts. 
If a part name is absent, it is expected that this will ⊗4NEVER⊗* be filled in for
this particular BEING. If the name is present but there is no value, then the
system might need to (and would, then) fill this part in sometime.

.SELECT 4
See GIVEN KNOWLEDGE document, please, for this information.    
In there you will find a few lines of information about each part of each of the
(roughly 125) BEINGs planned to be given to the system initially.
.SELECT 1
.NSEC(COMMUNICATION)

The work in this area consists of collecting English words and
grammatical constructions, of the kind found in various mathematics texts.
The next step is to exhaustively categorize all
words and phrases, and tie each one in to a BEING or a specific part. Also, some
fixed language scheme for communicating intuitive information must be devised.

Another ability which must be present is a DWIM-like recovery facility, tailored to
the kinds of errors one makes when discussing mathematics. For example, if someone
mentions "3+4", when + is defined only for rational numbers (a different symbol
is employed for integers), then the error should be resolved by this simple bit of
psychology: "If an operation is applied incorrectly, and its real domain is in a very
closely analogous system, then map it back to find out which operator was really 
meant, and warn the speaker to be more precise in the future."
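A toy sketch of that recovery heuristic follows; the operator symbols "+rat" and
"+int" and both tables are invented for illustration, and this is not a proposed
implementation:

```python
# If an operation is applied outside its domain, and an operator of the
# same name exists in a closely analogous system with the right domain,
# map back to that operator and warn the speaker.

DOMAIN   = {"+rat": "rational", "+int": "integer"}
ANALOGUE = {"+rat": "+int", "+int": "+rat"}     # closely analogous systems

def resolve(op, arg_domain):
    if DOMAIN[op] == arg_domain:
        return op, None                          # no error at all
    candidate = ANALOGUE[op]
    if DOMAIN[candidate] == arg_domain:
        return candidate, "warn: be more precise; assuming " + candidate
    return None, "cannot resolve"
```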

A third aspect is that of acclimatization to individual vocabulary and terminology.
For example, is a function from A to B necessarily defined on all of A?  One way to
acquire the user's preferences is during analysis of an error (as above); another way
is of course to allow the user to name the β himself (e.g., give him examples and 
intuition parts). His specific choices will go into the IDEN parts of the relevant
β's; if there is any possibility of contradiction with standard usage, the entry will
be tagged with the user's name, for future reference.  Of course a single user may refer
to the identical concept by more than one name, but the system should never permit
him to refer to two different things by the same name. In such a case, if the user
stands firm on the new entity, allow him to rename the older entity.

.NOFILL

.GROUP
.SSEC(Categories of Languages)

English ↔ BEINGs
  Standard Math Notation
	IMPLICATION
	SPECIFICATION
	COMBINATION
	OPERATION
	DEFINITION
	KNOWN RELATIONS
	ENTITIES
  Fixed Formats for Quasi-English Meta-Comments, Questions, Hints
	ACTIVITIES
	RESTRICTED CONCEPTS
	INTELLECTUAL PROCESSING
	TIME AND SPACE REFERENCES
	INDEFINITES
	QUESTIONS
  Fixed Language for Communicating Intuitive Concepts

BEINGs ↔ BEINGs
  The whole idea of BEING parts; especially: representation part of archetypical β's.
  Language for Intuitive Communication
  Language for Communication via Inference from Examples
.APART

.SSEC(Standard Math Notation)

.BEGIN W(7);  FILL RETAIN; INDENT 0,6,0

IMPLICATION
  IF ... THEN ...
  IMPLIES
  IFF
  IF
  ONLY IF
  IS IMPLIED BY
  THEREFORE
  THUS
  SUPPOSE ... THEN
  LET ... THEN
  THEN
  SO
  HENCE
  IN ORDER TO...
  IT SUFFICES THAT
  NECESSITY
  SUFFICIENCY
  →
  ←
  ↔
  WHENEVER
  WHEN
  CAUSE, CAUSALITY, BECAUSE
  ENTAILMENT

SPECIFICATION
  SUCH THAT
  SATISFYING
  WITH
  WHERE
  SOME
  THE
  A/AN
  ALL
  EVERY
  NO ... IN
  ⊗6∀⊗*
  ∃
  FIXED
  VARIABLE
  ANY
  EACH
  MOST
  THERE EXISTS
  WHICH
  THAT
  THIS
  OTHER
  ABOUT


COMBINATION
  AND
  OR
  ∧
  ⊗6∨⊗*
  NOT
  ⊗6¬⊗*
  ALSO
  BUT


OPERATION
  RELATION
  PREDICATE
  f/g/h
  DO
  APPLY
  COMPUTE
  OPERATE
  PRODUCE
  ACCORDING
  CORRESPOND
  ALGORITHM
  <silent imperative>
  COMPOSITION
  o
  MAP
  TAKE
  SEND
  PULL
  IMAGE
  RANGE
  DOMAIN
  f:D→R
  PREIMAGE
  UNDEFINED
  DEFINED
  f(a,b,c)
  CLOSED


DEFINITION
  DEFINE
  CALL
  =df
  NOTATION FOR ...
  REFER TO...
  NAME


KNOWN RELATIONS
  EQUALITY
  =
  IS/ARE
  INEQUALITY
  ORDERING
  GREATER
  LESS
  SUBSET
  ⊂
  ⊃
  CONTAINS
  INCLUDES
  MORE
  INTERSECTS
  ∩
  UNION
  ∪
  APPEND
  BETWEEN
  INSIDE
  OUTSIDE
  INCLUSION
  EXACTLY
  COMPLEMENT
  SETDIFFERENCE
  +,-,x for sets
  CONS
  CAR
  CDR
  FIRST
  LAST
  ALL BUT
  JOIN
  PREVIOUS
  PRECEDE
  SUCCEED
  FOLLOWING
  NEXT
  NEAR
  FAR
  CLOSE
  ANALOGOUS


ENTITIES
  ATOM
  ELEMENT
  CONSTANT
  VARIABLE
  SET
  TUPLE
  BAG
  MEMBER
  ⊗6ε⊗*
  THING
  ENTITY
  OBJECT
  IDENTIFIER
  NAME
  LABEL
  VALUE

.SKIP 2

.SSEC(Fixed Formats for Quasi-English Meta-Comments and Questions)

ACTIVITIES
  DO...
  CONSIDER...
  USE
  LOOP
  REPORT
  DISTINGUISH... AND/FROM ...
  EXPLAIN
  DISCUSS
  GET

RESTRICTED CONCEPTS
  ELLIPSIS
  ETC.
  AND SO ON
  ...
  pronouns
  SIMILARLY
  ANALOGY
  SIMPLIFY
  REDUCE
  FAILURE
  SUCCESS

INTELLECTUAL PROCESSING
  THINK
  CONCENTRATE
  CONSIDER
  ATTEND
  ASSUME
  SOLVE
  PROVE
  SEE
  HYPOTHESIS
  PROBLEM
  SOLUTION
  INVESTIGATE
  DISCOVER
  UNDERSTAND

TIME AND SPACE REFERENCES
  EARLIER
  LATER
  BEFORE
  AFTER
  THEN
  NOW
  NEVER
  ALWAYS
  HERE
  THERE
  UNDER
  ANYWHERE
  NOWHERE


INDEFINITES
  SHOULD
  WOULD
  COULD
  MIGHT
  POSSIBLE
  PROBABLE
  PLAUSIBLE
  BEAUTY
  POTENTIAL
  CAN
  forms of TO BE
  OUGHT
  CONFUSION
  DEFINITE/INDEFINITE
  CERTAIN/UNCERTAIN
  TRANSLATE
  DIFFICULTY
  PLEASURE
  SO
  UNIQUE
  EXISTENCE


QUESTIONS
  WHAT x
  WHY/WHY NOT x
  HOW
  WHEN

.END


.SSEC(Fixed Languages for Intuitive Communication)

No good new ideas have yet been found. At the moment, the plan is as follows:
.FILL
Each intuition will be an opaque function, which simulates some real-world situation.
The caller must specify as much as possible about the situation, after which the
function takes over and produces a description of what happens and/or the final state
of the world afterwards. The caller and the function together should know enough to
provide the caller with  the specific piece(s) of information desired. Often, the
kind of data provided will clue the intuition function as to what is wanted in 
return; often, the caller will know specifically what he wants back. Thus there may
not need to be any "language" in the normal sense of the word
(just some default schedule for calling).
Similarly, any BEING can
communicate any information by encoding it into examples and letting the receiver
decode it by inference from those examples. In that case, though, one must ensure a
universal sort of inference mechanism, perhaps an Infer-from-examples BEING with
whom it is easy for everybody to communicate directly.
Of course this is a very slow, inefficient mode of communication, and much information
may be lost or distorted.

An example: the old seesaw intuition. The function S simulates a seesaw, with
person p of weight w sitting on left or right side of seesaw, d distance from
the center, with person q of weight.... etc., for any number of people, and 
also says which way the seesaw tilted originally (left, right, or balanced),
which way it tilted finally,
and how quickly it moved from the start to the end. Any number of these
parameters may be left unspecified; the function will make an effort to provide
ranges for them, and/or examples of them. It is important to notice that the
function itself is not permitted to "give away" the fact that, e.g., the names are
completely irrelevant, and that interchanging all lefts↔rights is equivalent.
That is, the function's ⊗4insides⊗* may know this when they compute the value, but
no BEING can ever access that information; the most he can do is look at lots of
examples and infer that invariance from them.
The actual code will compute L = the sum of (w x d) for each person on the left side,
R = sum of weight times distance from center for each person on the right side,
and the final activity is:
.BEGIN NOFILL INDENT 6

If L=R, then the final state is the same as the initial state; else the side with Maximum(L,R) goes down.
If the state changed, the speed is proportional to the difference between L and R.
.END
The above are inverted easily in case the final change is given and a proposed
configuration of sitters is the desired unknown.
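The stated rule can be sketched directly (Python for illustration; the sitter
format and the function name are assumptions of the sketch):

```python
# The stated computation.  A sitter is (name, weight, side,
# distance-from-center); side is "left" or "right".

def seesaw(sitters, initial="balanced"):
    L = sum(w * d for (name, w, side, d) in sitters if side == "left")
    R = sum(w * d for (name, w, side, d) in sitters if side == "right")
    if L == R:
        return initial, 0                 # same as the initial state
    final = "left" if L > R else "right"  # the larger moment goes down
    return final, abs(L - R)              # speed proportional to |L - R|
```

Nothing in the returned values gives away that the names are irrelevant, or that
interchanging all lefts and rights is equivalent; a BEING could only infer those
invariances from examples.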
.NSEC(EXAMPLES)

Before deluging the reader with detailed traces of the proposed system execution,
let us skim over a few situations and see 
how it would handle them, how it would discover
some of the interesting information which we won't supply it with initially.

.SKIP 3

.SSEC(Example 1: Discovering New Properties of Relations)

Consider first how some of the commonly-supposed "primitive" concepts may in fact be
naturally generated from more primitive ones. For example, most of the types of
properties a relation can possess fall here. 
Consider ⊗4Function⊗*: it is the condition that every element's image has the very
special property "singleton". 
The idea of an ⊗4Inverse⊗* can be discovered from the primitive concept of
reversing the order of an ordered pair. This latter idea probably cannot be
synthesized from more basic ideas, hence ⊗4must⊗* be inserted by hand initially
(reversing a pair ⊗4can⊗* be derived from set theory, but only via intricate
encodings).
Consider ⊗4Surjection⊗*: it is the
coincidence of two sets: the range and the image. Consider ⊗4Injection⊗*: it is the
fact that the inverse has the interesting property "function". ⊗4Bijection⊗* is the
coincidence of the previous two, and has an interesting intuitive interpretation
(1-1 correspondent matching) which makes it worth keeping as a separate BEING.
These coincidences could easily be proposed, in some order,
and examined. Any which deserved to exist as separate named concepts would be
made into BEINGs, the rest forgotten. Such justifications might include special
simplifications, interesting new properties observed, a new way of intuitively
viewing the situation, etc. 
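These coincidence definitions can be sketched concretely, with a relation
represented as a set of ordered pairs (Python for illustration; the predicate names
are invented):

```python
# A relation is a set of ordered pairs; dom and rng are its intended
# domain and range.

def image(rel, x):
    return {b for (a, b) in rel if a == x}

def is_function(rel, dom):
    # every element's image has the special property "singleton"
    return all(len(image(rel, x)) == 1 for x in dom)

def inverse(rel):
    # built from the primitive of reversing an ordered pair
    return {(b, a) for (a, b) in rel}

def is_surjection(rel, rng):
    # the coincidence of two sets: the range and the image
    return {b for (a, b) in rel} == rng

def is_injection(rel):
    # the inverse has the interesting property "function"
    inv = inverse(rel)
    return is_function(inv, {a for (a, b) in inv})

def is_bijection(rel, dom, rng):
    return is_function(rel, dom) and is_surjection(rel, rng) and is_injection(rel)
```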

.SKIP TO COLUMN 1

.SSEC(Example 2: Effort of Noticing New Analogies)

The system will have a general strategy, located in the ORDERING part of the
BEING named ANY-β, which says that if no effort has been expended whatsoever to try
to find analogues of the current β, then there is a high interest in doing this
activity (about the same motivation as finding examples of it, 
if there are none yet).

This will lead to two distinct types of behavior. When the system is first started,
whichever BEING is chosen to be completed by COMPLETE, its Analogy subpart of its
Ties part will be blank, hence early on it will want to find analogies to itself
(it will trigger Analogize, with itself as the only known argument). The second
type of activity occurs when a new BEING is created by the system. It will usually
be worked on almost immediately, and after the highest-priority parts are filled in
(like intuition, definition, perhaps some examples and family ties) the above-mentioned
strategy will direct attention to finding analogies to it. 

When a β wants to find its analogues, Analogize will look within β's family, scanning
for another BEING which has one of: (i) syntactically similar definition,
(ii) intuition view which applies to the same real-world situation, (iii) syntactically
similar examples, (iv) similarities between the two BEINGs' Ties parts.

The initial flurry of analogy quests will number about  18,000
( = 5 families  x  30 β's/family  x 30 β's to interact with  x 4 part-pairings to examine)
Some of these will be precluded almost instantly, so a reasonable figure is about
three CPU hours of time expended, finding about 1,000 possible analogies in toto,
of which only about 100 will prove interesting upon careful examination and will be
made into new BEINGs. Another speedup will occur because many of the initially supplied
BEINGs will not have anything in their Examples or Ties parts to begin with,
so those matches will fail trivially.

The secondary process of analogizing, when a new Ties, Ex, Defn, or Intu part is
added, is of course only a matter of  30 ( = 1 same family  x  30 β's to consider 
x 1 same part as the new one) things to look at.
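The two counts can be checked directly, from the figures given in the text:

```python
# The initial flurry and the secondary process, computed.

families, betas, part_pairings = 5, 30, 4
initial_quests   = families * betas * betas * part_pairings   # first flurry
secondary_quests = 1 * betas * 1      # same family, same part as the new one
```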

.SKIP TO COLUMN 1
.SSEC(Example 3: Filling in the Examples parts of Objects)

.TURN OFF "{}"

Let us now consider a fairly detailed example. What happens when the
system first starts up?  Each β will probably only have a few parts,
hence all will clamor for attention. The ORDERING part of ANY-β
indicates that after the definition and intuition, the next most
important part to fill in is Examples.  The reason is that examples serve
both as motivation and as later empirical evidence. 

.ONCE TURN ON "{}"
The environment function Complete takes over; see page {QP2}. The
numbers below refer to the steps listed there. 
The details of each access are omitted, for brevity.

1. Neither P nor B is known. Ask each β how relevant it is to the
current situation, CS, which at the moment is almost totally null.
Since most β's require ⊗4some⊗* constrained state of CS, in which
something is true or within certain bounds, there are only about
thirty responders (out of about 125 β's).  These include Structures
(who want examples), Actives (who want examples and ties),
Static-Metas (examples), and a couple of Active-Metas (Guess,
Analogize)
who also want some examples.
The Time component of Analogize's Worth part is
incredibly low (since no arguments -- suggestions for either candidate
in the analogy -- have been proposed). Any of the Actives, and also
Guess, would first ensure that the Examples parts of the things it
deals with be filled in, hence we may as well assume that we are
filling in the Examples part of a Structure or a Static Meta.  The
latter category are not as easy, and are generally produced by an
Active Meta's direct command.  
Decide to work on structures.
Of all the structures, Set and Bag are
tied with the higest ORD value (based on their Worth parts).  The
list (Set Bag) is printed to the user, who then has a few seconds to
respond before the system begins working on one of them.  Say the user
doesn't care, and B is now determined to be Set. 
The next step is to choose P, the part of the Set BEING to work on now.
We collect all the
facts on Up↑*(Set).ORDERING, which means Set.Ordering, Structure.Ord,
Object.Ord, Anyβ.Ord. In this case, only the last of these is
nonempty. Each factor is evaluated, and Examples wins with a factor
of .6 on a 0 - 1 scale.   So P is chosen to be Examples.
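Step 1 amounts to two maximizations, which can be sketched as follows (all numbers
and names below are invented for illustration):

```python
# Choose B as the responder with the highest ORD value, then choose P
# by evaluating the factors found on Up*(B).ORDERING.

def best(candidates):
    """Pick the candidate with the highest score on a 0-1 scale."""
    return max(candidates, key=candidates.get)

responders = {"Set": 0.9, "Bag": 0.9, "Oset": 0.5, "List": 0.4}
B = best(responders)              # Set and Bag tie; one is chosen

factors = {"Examples": 0.6, "Ties": 0.4, "Specializations": 0.2}
P = best(factors)                 # Examples wins with .6
```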

2. Create a plan for filling in Set.Examples.  Collect any helpful
information from the following sources: Examples.Fillin (which
contains many things to try to get new examples), Set.Examples.Fillin
(nothing there), Structure.Examples.Fillin (which contains some
specialized hints: Convert other structures' Examples; make some of
the interestingness features present in up↑*(Set).Interest not just
desirable but actually required, thereby guaranteeing an
⊗4interesting⊗* example), Object.Examples.Fillin (empty),
Anyβ.Examples.Fillin (empty).  Finally, some additional parts of the
Examples β might be relevant later on.  The plan is simple: try the
Examples.Fillin activities, then the second Structure.Examples.Fillin
activity, then check the results with the Examples.Check part. 
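As a rough sketch of this plan assembly (with invented string labels standing in for the executable strategies the real Fillin parts would hold), step 2 amounts to concatenating whatever the Fillin sources along Up↑*(Set) contribute, then appending the check:

```python
# Sketch of step 2: build a plan by collecting Fillin information from
# Examples.Fillin and from X.Examples.Fillin for each X on Up*(Set).
# The part contents below are invented labels, not the system's code.

FILLIN_SOURCES = {
    "Examples.Fillin": [
        "instantiate specializations",
        "instantiate definitions",
        "apply intuitions",
        "invert the recursive definition",
        "tag Examples parts of related operations",
    ],
    "Set.Examples.Fillin": [],
    "Structure.Examples.Fillin": [
        "require (not merely prefer) the Interest features",
    ],
    "Object.Examples.Fillin": [],
    "Anyβ.Examples.Fillin": [],
}

def make_plan():
    """Concatenate each source's contribution (empty parts contribute
    nothing), then end by checking the results."""
    plan = [activity
            for source in FILLIN_SOURCES.values()
            for activity in source]
    plan.append("check results with Examples.Check")
    return plan
```

Since the general Examples.Fillin source is listed first, its strategies run before the specialized Structure ones, which mirrors the ordering the plan in the text calls for.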

3/4. Try to instantiate specializations of Set.  There are none.
Fail. 

3/4. Try to instantiate definition(s) of Set.  A simple linear
analysis of the base step of the recursive definition yields the fact
that (CLASS ), called PHI, is an example of a set. Return (CLASS ). 

3/4. Try to instantiate and apply the intuition(s). The set intuition
requires a purpose (and often other sets); the intuition is not
designed to create sets from nothing for no purpose (if it were, this
would be highly suspect!). Thus this fails.

3/4. Try to invert the recursive definition, so it produces more
complicated examples.  In the current case, this is trivial. We
simply apply the algorithm: Take known sets and apply Set-insert to
them. To start out, we plug in the specific base-step set, namely
Phi. The result is (CLASS (CLASS ) ), usually written { {} }.  We
reapply the algorithm with one argument PHI, and get 
.BEGIN NOFILL WIDEN 1,7
[CLASS (CLASS (CLASS)) (CLASS )]; with both arguments equal to (CLASS (CLASS)), we obtain
[CLASS (CLASS (CLASS)) (CLASS (CLASS)) ]  =  { {{}}, {{}} }.
.END
There is
no reason just now to go on, since we have the algorithm, so we
return these few examples, plus the inverted recursive
definition itself. 

⊗7Aside: We have no way of knowing, though, whether this process
always gives new sets. That is, Phi might equal Setinsert((Class
Phi),(Class Phi))=(Class (Class Phi) Phi). One would have to do
inference, using some foundation axiom, to prove that x can never be
an element of x, just to prove that we have three distinct sets here;
the actual proof that the chain [x↓n↓+↓1 = Set-insert(x↓n, x↓n)] never
repeats is not trivial unless the axiom is phrased in just the right
way. The fact that this arose out of inverting a recursive
definition, however, strongly suggests that this algorithm will in
fact yield an infinite number of distinct sets if there are
infinitely many.
An indirect proof could now be proposed, namely assume that A,B,...,Z are the only
distinct sets which can exist, and then derive a contradiction.⊗*
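For illustration, the inverted definition can be run mechanically. In the sketch below, sets are modeled by Python frozensets, which sidesteps the distinctness worry raised in the aside: the model is well-founded by construction, so applying Set-insert with a genuinely new element always yields a new set. That is an assumption of the model, not something the system has proved.

```python
# Run the inverted recursive definition: start from the base-step set
# PHI and repeatedly apply Set-insert to already-known sets.
# frozenset models (CLASS ...); hashability lets sets contain sets.

PHI = frozenset()                      # (CLASS ), i.e. { }

def set_insert(element, s):
    """Set-insert: the set s with element added (a no-op if present)."""
    return s | {element}

known = {PHI}
for _ in range(2):                     # two rounds of the algorithm
    known = known | {set_insert(e, s) for e in known for s in known}

# Two rounds yield {}, {{}}, {{{}}}, and {{},{{}}} -- four distinct sets.
```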

3/4. Tag the Examples parts of Set.Ops (namely Member, Containment, Get-Some-Member,
Equality, Set-insert, and Set-delete) as follows: Put a little note in
each of these parts, saying that Set.Examples contains some examples
of Domain elements for these operations. This will raise the level of
estimated worth of working on ⊗4their⊗* examples parts.  ⊗7This is
one small example of how tangential knowledge speeds up, rather than
slows down, the acquisition of new information.⊗*

3/4. Now we pass from the general Examples.Fillin strategies to the
Structure.Examples.Fillin strategies.  We must conjoin the Interest
properties as if they were requirements.  Set.Interest asks for the
elements of a set to be related by some other known, interesting
relation (besides being members of the same set).  Things which are
so related are located via their Ties parts, so the task is to find some
Tied entities and make them the elements of a set. The first part is
done, say, by noticing the Ties part of Set itself, namely to other
structures, and the second part is done when Make-a-Set recognizes
its own relevance. The result is thus (CLASS Hist List Bag Oset Set
Ordered Pair). 
The ease with which this was done signals that it may be explosive, so we
don't pursue this method of set construction any further right now.

3/4. Structure.Interest says that the structure should be such that
certain interesting operations are doable to it efficiently, without
going into detail about which operations.  The part Set.Operations
contains Member, Get-some-member, Subset, Equal, Setinsert,
Setdelete, and some invariance data.  After studying these
operations, it decides that PHI is the most efficient argument to
each and every one of them. This result, while trivial, is noticed as
a surprising (to the system) discovery, and may be sufficient to
ensure that PHI is made into a β itself, and its properties studied. 

5. The percentage-of-success factor in the Set.Worth vector is
incremented (say from .9 to .91), its analogic utility factor
likewise (from .8 to .9).  There is not enough activation energy left
to pursue any more examples of Set just now. A marker is left here
indicating how much effort was spent, how the inverted recursive
definition can be used, and the hint of a conjecture about the
diversity of its results. 

1. Return and decide if Set is still the best β, and/or if Examples
is still the best part to concentrate upon. Probably, the Examples
part of Bag will be the highest-priority part to fill in at the
moment. In this manner, we may suppose in later examples that the
system has spent some time to collect examples of all the structures.

.SKIP TO COLUMN 1
.SSEC(Example 4: Considering New Compositions of Operations)

The activity for finding examples of Actives is similar
to that for finding examples of Sets, except that
Structure.Examples.Fillin is not relevant to Actives, and its place is taken
by Active.Examples.Fillin, which contains one
specific activity: after a new specialized
operator is found, go to the trouble of applying it to  a few examples
of its domain. This gets a bit intricate if, e.g., its domain is itself
a set of operators. The trickiest case, when the Active is the β named Compose, is
presented now to help clarify (??) all this "examples of examples of..."  confusion.


1. Complete wants to determine P and B. Again the same thirty or so β's respond,
as in example 3. This time, we assume that the static β's have their examples filled
in, likewise most of the Actives. Guess and Analogize are still low in their 
run-time worth components (due to unspecified arguments).  Suppose that no examples
are known for the Active BEING named Compose, and its Worth part lets it be chosen.
Ordering (of Any-β, actually) specifies that Examples should be filled in.

2. Must devise a plan for filling in the Examples part of the Compose BEING.
Much information exists under Examples.Fillin, and some also under
Active.Examples.Fillin. The latter indicates "Afterwards", so it is done only
after the Examples.Fillin strategies are exhausted. The final information
recognized to be relevant is present in Compose.Algorithms, and is used
when the terms of the definition get replaced by specific examples of themselves.

3/4.  Specialize the definition of Compose.
Must find an ordered pair of operations (f,g),  with dom(f) ⊃ ran(g).
Access the Find.Algorithms part.  This says to consider K={set of known operators}.
Form the cross-product C=KxK. If there is some reasonable ordering on C, order it.
Pick an e in C, say e=(j,k). Check whether ran(k) ⊂ dom(j).
If so, apply Compose.Algorithm. In either case, you can continue by picking
another element of C, etc.
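The Find algorithm just quoted can be sketched as follows. The operator signatures are invented for illustration (coarse type tokens stand in for real domain and range descriptions): form C = KxK and keep the pairs whose ranges and domains mesh.

```python
# Sketch of Find.Algorithms for Compose: enumerate C = K x K over the
# known operators and keep each pair (j, k) with ran(k) contained in
# dom(j).  The signatures below are coarse, invented type tokens.

from itertools import product

# operator name -> (domain types, range types)
K = {
    "Insert":  ({"Structure"}, {"Structure"}),
    "Delete":  ({"Structure"}, {"Structure"}),
    "Member":  ({"Structure"}, {"Truth-value"}),
    "Contain": ({"Structure"}, {"Truth-value"}),
    "Equal":   ({"Structure", "Truth-value"}, {"Truth-value"}),
}

def compositions(ops):
    """Yield every legal composition j o k, i.e. ran(k) <= dom(j)."""
    for (j, (dom_j, _)), (k, (_, ran_k)) in product(ops.items(), ops.items()):
        if ran_k <= dom_j:             # ran(k) contained in dom(j)
            yield f"{j} o {k}"

found = set(compositions(K))
# "Insert o Delete" and "Member o Delete" qualify; "Insert o Member"
# does not, since a truth-value cannot feed Insert.
```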

By applying the above algorithm, the system uncovers a wealth of possible compositions.
For example, αα ⊗7o⊗* Delete, where αα can be any one of: Insert, Delete, Convert,
Subst, Unite, Common-parts, Member, Contain. Some compositions of second-order worth
include compose*(delete,delete) and equal*(delete,delete). Some third-order ones
are compose*(delete,insert) and equal*(delete,insert).

Each example found (and there will be about 300 of them) is made into a BEING whose
Worth part indicates high interest but a short lifetime if nothing new and interesting
is found out about it (if no interesting Tie is discovered rapidly).  
This is done by
Active.Examples.Fillin, which also calls specifically for investigating
Ties of the form: (input,output) satisfy some known interesting
relation; and for seeing whether
the new operation is related to another by yet a third.
The 300 new Active β's are chosen one at a time, 
and their intuition and examples parts are filled out. 

Let us take a few examples (not all 300!!). Consider Insert o Delete. This means
take a structure S, produce all the pairs (e, result of deleting e from S), then
call Insert on each of these pairs, getting a list of new structures
characterized by  {S' | ∃e. S' = result of inserting e into the result of deleting
e from S}.  Another view is to be given a structure S and an element e, then
perform Delete(e,S), obtaining structure R, then perform the (explosive) operation
Insert(R), yielding all the pairs (f, result of inserting f into R).
.NOFILL
Some examples are found, say PHI → {(e,PHI)} → { {e} };
{f} → { (e, {f}), (f, PHI) } → { {e,f}, {f} };
(f) → { (e, (f)), (f, NIL) } → { (e,f), (f) };
(f,g) → { (e, (f,g)), (f, (g)), (g, (f)) }  →  { (e,f,g), (f,g), (g,f) }.
.FILL
Filling in the intuition part of Insert o Delete, we get the tight meshing of:
pull x out of container S and then drop it back in. This should indicate
that Insert*Delete may leave the original container unchanged. That would mean that
Insert o Delete (S) = {S}; a slight weakening would be the statement
"S is a member of Insert o Delete (S)". One potential exception is when you can't
pull the thing x out of S; when S is null or (more intelligently) when the entity
x is not already a member of S.
Either the intuition or the examples should indicate a conjecture of the form
.NOFILL
S non-null  ↔  S ⊗6ε⊗* Insert o Delete(S), or perhaps even the sophisticated one:
x is in set S  ↔  Insert(x,Delete(x,S))=S.
.FILL
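The conjecture can be probed empirically in modern notation. This is a sketch: frozenset models sets, collections.Counter models bags, and the helper names mirror, but are not, the system's β's.

```python
# Empirical probe of the conjecture: Insert(x, Delete(x, S)) = S
# exactly when x is already a member of S.  frozenset models sets,
# collections.Counter models bags.

from collections import Counter

def set_delete(x, s): return s - {x}
def set_insert(x, s): return s | {x}

S = frozenset({"f"})
assert set_insert("f", set_delete("f", S)) == S    # f in S: restored
assert set_insert("e", set_delete("e", S)) != S    # e not in S: S grew

def bag_delete(x, b):
    b = Counter(b); b[x] -= 1
    return +b                  # unary + drops zero/negative counts

def bag_insert(x, b):
    b = Counter(b); b[x] += 1
    return b

B = Counter({"f": 2})
assert bag_insert("f", bag_delete("f", B)) == B    # one f removed, one restored
assert bag_insert("e", bag_delete("e", B)) != B    # e was never there
```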

Consider now the composition Member o Delete, which takes a structure S,
deletes some element x, then sees if x is a member of S. Notice that the range
is therefore {T,F}. From examples alone, the system should notice that all sets
and osets map into False, and some-but-not-all lists and bags map the same way.
The intuition should answer the riddle "how/when does a list/bag ⊗4not⊗* map into
False?" The answer is that there was more than one x in the original structure,
so there is still at least one left when you pluck out one x. One way to ensure
that this ⊗4will⊗* occur is to insert x twice into the  list or bag
before you try to apply this composition. The conjecture thus arrived at is:
Member*Delete*Insert*Insert(S) contains T iff S is a bag or a list, and otherwise
(for sets and osets) it is {False}.  The user may name this property "duplicity".
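The "duplicity" property is likewise easy to probe in sketch form (again frozenset against Counter; the helper names are hypothetical):

```python
# Probe of "duplicity": insert x twice, delete it once, then test
# membership.  A set absorbs the second insertion, so the answer is
# False; a bag keeps both copies, so the answer is True.

from collections import Counter

def member_delete_insert_insert_set(x, s):
    s = set(s)
    s.add(x); s.add(x)         # second insert is absorbed by a set
    s.discard(x)               # delete one occurrence
    return x in s

def member_delete_insert_insert_bag(x, b):
    b = Counter(b)
    b[x] += 2                  # two genuine insertions into a bag
    b[x] -= 1                  # delete one occurrence
    return b[x] > 0

assert member_delete_insert_insert_set("x", set()) is False
assert member_delete_insert_insert_bag("x", Counter()) is True
```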

The third example of a composition we shall deal with explicitly here is the awful
one Assign*Never.  First we take an unquantified proposition P, then turn out
ordered pairs (x, ⊗6∀x.¬⊗*P(x)) for all possible variable names x. For each such
pair (x,Q), we then assign to the variable x the value Q.  This bizarre operation
thus could map P="x is in S" to the situation where y had the value "for all y,
x is never in S", where x had the value "⊗6∀x.¬xεS⊗*", etc. 
After much groping, this 
might lead to distinguishing the positive quantifiers (⊗6∀,∃⊗*) from the
negative quantifiers (never and not always).
The intuitions will positively rebel at this unnatural composition, and the β will
be allowed to die soon. All the other compositions of the form "Assign o <quantify>"
will be noticed as
similar to this dismal failure, hence the system will not even waste this much time
on them.
Notice that there is nothing wrong with glancing at this horrible composition.
It is only upon examination that the intuitions are asked to ripple outward, hopefully
towards each other, until they intersect in some image. In this case, they don't
meet in any reasonable time, which is the computational equivalent of saying that
their composition is aesthetically disgusting. The only "stupidity" would be to
notice this mismatch and ignore it;  the common brand of "ignorance" would be never
to uncover this intuitive incompatibility.

.SKIP TO COLUMN 1
.SSEC(Example 5: Proving a Conjecture)

Let us discuss the examination of the intuitively and empirically justified
conjecture first mentioned in example 4, on the last page:
For any non-null structure S, S is one  result of Insert*Delete(S).

The Test BEING grabs control, and since the conjecture is believed (intuitively
and empirically) he calls
on Prove. The first action under Prove.Algorithms is to clarify the 
existing informal justification.
The intuition was to break off a piece x from S and then glue it back into the
same place; to pull a thing out of a container and then throw it back in.
The examples  include many of each known type of structure.
Prove then asks which type of proving seems most relevant. Proving-⊗6∀⊗*'s is eager,
and narrowly wins over Cases and Natural Deduction. This is to no avail, for
Proving-⊗6∀⊗*'s immediately asks about Cases of structures anyway.
Structures.Specializations informs him that the types are Hist, List, Oset, Bag, Set.

Before working on separate case proofs, as much general-purpose proof as possible
should be firmed up. The intuition is asked to notice features about the element
which is deleted in the successful cases. It infers that the element was always a
member of the structure; it probably doesn't notice that it was the first member
in the ordered case (lists and osets and hists) and any member in the
unordered case (sets and bags). The second step, 
that of insertion, brings you back to
the original structure if you then glue it back in precisely the place you took it
from (ordered) or anywhere (non-ordered).
The Insertion β is asked where it puts the element, and its intuition replies that
it goes at the front of the structure, so that Some-member can grab it next.
The reasoning now is that if x is placed such that Some-member grabs it next,
and it had to be placed where it came from, then it had to come from the place which
Some-member would grab. That is, x had to be Some-member of S. The suggested proof is:
.NOFILL
x is Assigned the value Some-member(S).
S' is Assigned the value Delete(x,S).
S'' is Assigned the value Insert(x,S').
Claim S'' = S.
That is, S = Insert(Some-member(S), Delete(Some-member(S), S)).
.FILL
The general-structure axioms are insufficient to prove this, so we finally do break
the conjecture into cases, hopefully using the suggested proof as our model. For the
cases of Sets and Osets, this proof works fine. For the rest, however, the duplicity
simply confuses the issues and leads, e.g., to infinite chains of inductions which
simply don't get any easier. The core of this dilemma is the need to count the number
of occurrences of each element in a bag or list, which of course the system can't
do now. The two alternatives are to defer this until later (but rely on it unless
proven otherwise), or to add new list/bag axiom(s) from which this conjecture 
could be deduced.  The former
is preferred, since it avoids interacting with the user,
and since Assuming is a way of "copping out".
So we postpone further attempts
at proving this until some new, powerful knowledge is gained relevant to
bags and lists.
A note is added in case any new bag or list axiom is later considered: its value is
boosted if it also helps prove this conjecture (in addition to its reason for
existing).

The next example goes into detail about a much more trivial conjecture.
.SKIP  TO COLUMN 1
.SSEC(Example 6: Formally Investigating an intuitively believed conjecture)

Note: It is difficult to find hard proofs at this low level.  
PHI=0={}=(CLASS )=empty set.
Below is what the user might see at near-maximal verbosity level.

.NOFILL

(1) Conjecture: The only relation from 0 to any set X is 0.

Test recognizes: conjecture 
     Intuitive Justification: Cannot seem to find any place for any arrow of the reln. to come
     from (i.e. can't draw arrow because can't choose an ele. from domain because there aren't any)
     Conclusion: since this is believed, we shall try to prove it, not disprove it.
     Access: A relation between A and B is a subset of A X B.
     Access: A X B is the set of all ordered pairs <a,b> such that a ⊗6ε⊗* A and b ⊗6ε⊗* B

Containment.Iden: To prove Any αα is β, consider any αα, show it's β.
     Consider any relation R: 0 → X.  Show it is 0.  Ask the β named PHI how to prove R=PHI.
     Answer: Show all subsets of 0 x X are 0; alternative: assume z is an ele, derive contradiction.
     Intuition: All subsets of a set are empty iff the set is empty. (Becomes a lemma.)
     Must show 0 x X = 0 for all sets X. This is intuitive.  (Becomes a lemma.) Done.

Prove: To prove p iff q, prove p implies q and q implies p.  To prove
     p implies q, assume p and the negation of q, and derive a contradiction.

     Now must prove two lemmas, by contradiction:
     (1) Say X is not empty but all its subsets are.  If X is not empty,
         there is some x ⊗6ε⊗* X.  If x ⊗6ε⊗* X then {x} ⊂ X. But {x} is not empty. Contradiction.
         Say X is empty but it has a non-empty subset Y.  If Y is non-
         empty, there is some y ⊗6ε⊗* Y.  By definition of subset, y ⊗6ε⊗* X.  Contradiction.

     (2) Access the definition of Cross-product to interpret 0xX. Any member z is of the
	 form (a,b), with a in 0 and b in X. But a can never be in 0. Contradiction.

Popping up, we discover that (1) is now proved.

Try to prove the converse of (1). Analogy with last proof (this will actually work) 

.GROUP
.SSEC(Other potentially productive examples)

The following might be interesting to actually simulate by hand before programming:

Discovering and developing a particular analogy.
Discovering and developing the idea of the transpose of a relation.
Working out a complicated inductive proof.
.APART
.FILL
.NSEC(BIBLIOGRAPHY)

.BEGIN FILL SINGLE SPACE  PREFACE 1 WIDEN 5,5 INDENT 0,6,0
.SSEC(Books and Memos)

Allendoerfer, Carl B., and Oakley, Cletis O., ⊗4Principles of
Mathematics⊗*, Third Edition, McGraw-Hill, New York, 1969.

Alexander, Stephen, ⊗4On the Fundamental Principles of Mathematics⊗*,
B. L. Hamlen, New Haven, 1849.

Aschenbrenner, Karl, ⊗4The Concepts of Value⊗*, D. Reidel Publishing
Company, Dordrecht, Holland, 1971.

Atkin, A. O. L., and Birch, B. J., eds., ⊗4Computers in Number Theory⊗*,
Proceedings of the 1969 SRCA Oxford Symposium, Academic Press, New York, 
1971.

Avey, Albert E., ⊗4The Function and Forms of Thought⊗*, Henry Holt and
Company, New York, 1927.

Badre, Nagib A., ⊗4Computer Learning From English Text⊗*, Memorandum
No. ERL-M372, Electronics Research Laboratory, UCB, December 20, 1972.
Also summarized in ⊗4CLET -- A Computer Program that Learns Arithmetic
from an Elementary Textbook⊗*, IBM Research Report RC 4235, February
21, 1973.

Bahm, A. J., ⊗4Types of Intuition⊗*, University of New Mexico Press,
Albuquerque, New Mexico, 1960.

Banks, J. Houston, ⊗4Elementary-School Mathematics⊗*, Allyn and Bacon,
Boston, 1966.

Berkeley, Edmund C., ⊗4A Guide to Mathematics for the Intelligent
Nonmathematician⊗*, Simon and Schuster, New York, 1966.

Berkeley, Hastings, ⊗4Mysticism in Modern Mathematics⊗*, Oxford U. Press,
London, 1910.

Beth, Evert W., and Piaget, Jean, ⊗4Mathematical Epistemology and
Psychology⊗*, Gordon and Breach, New York, 1966.

Black, Max, ⊗4Margins of Precision⊗*, Cornell University Press,
Ithaca, New York, 1970.

Blackburn, Simon, ⊗4Reason and Prediction⊗*, Cambridge University Press,
Cambridge, 1973.

Brotz, Douglas K., ⊗4Embedding Heuristic Problem Solving Methods in a
Mechanical Theorem Prover⊗*, dissertation published as Stanford Computer
Science Report STAN-CS-74-443, August, 1974.

Bruner, Jerome S., Goodnow, J. J., and Austin, G. A., ⊗4A Study of
Thinking⊗*, Harvard Cognition Project, John Wiley & Sons,
New York, 1956.

Charosh, Mannis, ⊗4Mathematical Challenges⊗*, NCTM, Washington, D.C., 1965.

Cohen, Paul J., ⊗4Set Theory and the Continuum Hypothesis⊗*,  W.A.Benjamin, Inc.,
New York, 1966.

Copeland, Richard W., ⊗4How Children Learn Mathematics⊗*, The MacMillan
Company, London, 1970.

Courant, Richard, and Robbins, Herbert, ⊗4What is Mathematics⊗*, 
Oxford University Press, New York, 1941.

D'Augustine, Charles, ⊗4Multiple Methods of Teaching Mathematics in the
Elementary School⊗*, Harper & Row, New York, 1968.

Dornbusch, Sanford, and Scott, ⊗4Evaluation and the Exercise of Authority⊗*,
Jossey-Bass, San Francisco, 1975.

Douglas, Mary (ed.), ⊗4Rules and Meanings⊗*, Penguin Education,
Baltimore, Md., 1973.

Dowdy, S. M., ⊗4Mathematics: Art and Science⊗*, John Wiley & Sons, NY, 1971.

Dubin, Robert, ⊗4Theory Building⊗*, The Free Press, New York,  1969.

Dubs, Homer H., ⊗4Rational Induction⊗*, U. of Chicago Press, Chicago, 1930.

Dudley, Underwood, ⊗4Elementary Number Theory⊗*, W. H. Freeman and
Company, San Francisco, 1969.

Eynden, Charles Vanden, ⊗4Number Theory: An Introduction to Proof⊗*, 
International Textbook Company, Scranton, Pennsylvania, 1970.

Fuller, R. Buckminster, ⊗4Intuition⊗*, Doubleday, Garden City, New York,
1972.

GCMP, ⊗4Key Topics in Mathematics⊗*, Science Research Associates,
Palo Alto, 1965.

Goldstein, Ira, ⊗4Elementary Geometry Theorem Proving⊗*, MIT AI Memo 280,
April, 1973.

Goodstein, R. L., ⊗4Fundamental Concepts of Mathematics⊗*, Pergamon Press, 
New York, 1962.

Goodstein, R. L., ⊗4Recursive Number Theory⊗*, North-Holland Publishing Co.,
Amsterdam, 1964.

Green, Waldinger, Barstow, Elschlager, Lenat, McCune, Shaw, and Steinberg,
⊗4Progress Report on Program-Understanding Systems⊗*, Memo AIM-240,
CS Report STAN-CS-74-444,Artificial Intelligence Laboratory,
Stanford University, August, 1974.

Hadamard, Jacques, ⊗4The Psychology of Invention in the Mathematical
Field⊗*, Dover Publications, New York, 1945.

Halmos, Paul R., ⊗4Naive Set Theory⊗*, D. Van Nostrand Co., 
Princeton, 1960.

Hanson, Norwood R., ⊗4Perception and Discovery⊗*, Freeman, Cooper & Co.,
San Francisco, 1969.

Hartman, Robert S., ⊗4The Structure of Value: Foundations of Scientific
Axiology⊗*, Southern Illinois University Press, Carbondale, Ill., 1967.

Hempel, Carl G., ⊗4Fundamentals of Concept Formation in Empirical
Science⊗*, University of Chicago Press, Chicago, 1952.

Hibben, John Grier, ⊗4Inductive Logic⊗*, Charles Scribner's Sons,
New York, 1896.

Hilpinen, Risto, ⊗4Rules of Acceptance and Inductive Logic⊗*, Acta
Philosophica Fennica, Fasc. 22, North-Holland Publishing Company,
Amsterdam, 1968.

Hintikka, Jaakko, ⊗4Knowledge and Belief⊗*, Cornell U. Press, Ithaca, NY, 1962.

Hintikka, Jaakko, and Suppes, Patrick (eds.), ⊗4Aspects of Inductive
Logic⊗*, North-Holland Publishing Company, Amsterdam, 1966.

Jouvenal, Bertrand de, ⊗4The Art of Conjecture⊗*, Basic Books, Inc.,
New York, 1967.

Kershner, R.B., and L.R.Wilcox, ⊗4The Anatomy of Mathematics⊗*, The Ronald
Press Company, New York, 1950.

Klauder, Francis J., ⊗4The Wonder of Intelligence⊗*, Christopher
Publishing House, North Quincy, Mass., 1973.

Klerner, M., and J. Reinfeld, eds., ⊗4Interactive Systems for Applied Mathematics⊗*,
ACM Symposium, held in Washington, D.C., August, 1967. Academic Press, NY, 1968.

Kline, M. (ed), ⊗4Mathematics in the Modern World: Readings from Scientific
American⊗*, W.H.Freeman and Co., San Francisco, 1968.

Kling, Robert Elliot, ⊗4Reasoning by Analogy with Applications to Heuristic
Problem Solving: A Case Study⊗*, Stanford Artificial Intelligence Project
Memo AIM-147, CS Department report CS-216, August, 1971.

Korner, Stephan, ⊗4Conceptual Thinking⊗*, Dover Publications, New York,
1959.

Krivine, Jean-Louis, ⊗4Introduction to Axiomatic Set Theory⊗*, Humanities Press,
New York, 1971.

Kubinski, Tadeusz, ⊗4On Structurality of Rules of Inference⊗*, Prace
Wroclawskiego Towarzystwa Naukowego, Seria A, Nr. 107, Wroclaw, 
Poland, 1965.

Lakatos, Imre (ed.), ⊗4The Problem of Inductive Logic⊗*, North-Holland 
Publishing Co., Amsterdam, 1968.

Lamon, William E., ⊗4Learning and the Nature of Mathematics⊗*, Science
Research Associates, Palo Alto, 1972.

Lang, Serge, ⊗4Algebra⊗*, Addison-Wesley, Menlo Park, 1971.

Lefrancois, Guy R., ⊗4Psychological Theories and Human Learning⊗*, 1972.

Le Lionnais, F., ⊗4Great Currents of Mathematical Thought⊗*, Dover
Publications, New York, 1971.

Margenau, Henry, ⊗4Integrative Principles of Modern Thought⊗*, Gordon
and Breach, New York, 1972.

Martin, James, ⊗4Design of Man-Computer Dialogues⊗*, Prentice-Hall, Inc.,
Englewood Cliffs, N. J., 1973.

Martin, R. M., ⊗4Toward a Systematic Pragmatics⊗*, North Holland Publishing
Company, Amsterdam, 1959.

Mendelson, Elliott, ⊗4Introduction to Mathematical Logic⊗*, Van Nostrand Reinhold
Company, New York, 1964.

Meyer, Jerome S., ⊗4Fun With Mathematics⊗*, Fawcett Publications,
Greenwich, Connecticut, 1952.

Mirsky, L., ⊗4Studies in Pure Mathematics⊗*, Academic Press, New
York, 1971.

Moore, Robert C., ⊗4D-SCRIPT: A Computational Theory of Descriptions⊗*,
MIT AI Memo 278, February, 1973.

National Council of Teachers of Mathematics, ⊗4The Growth of Mathematical
Ideas⊗*, 24th yearbook, NCTM, Washington, D.C., 1959.

Newell, Allen, and Simon, Herbert, ⊗4Human Problem Solving⊗*, 1972.

Nevins, Arthur J., ⊗4A Human Oriented Logic for Automatic Theorem
Proving⊗*, MIT AI Memo 268, October, 1972.

Niven, Ivan, and Zuckerman, Herbert, ⊗4An Introduction to the Theory
of Numbers⊗*, John Wiley & Sons, Inc., New York, 1960.

Olson, Robert G., ⊗4Meaning and Argument⊗*, Harcourt, Brace & World,
New York, 1969.

Ore, Oystein, ⊗4Number Theory and its History⊗*, McGraw-Hill, 
New York, 1948.

Pietarinen, Juhani, ⊗4Lawlikeness, Analogy, and Inductive Logic⊗*,
North-Holland, Amsterdam, published as v. 26 of the series
Acta Philosophica Fennica (J. Hintikka, ed.), 1972.

Poincare', Henri, ⊗4The Foundations of Science: Science and Hypothesis,
The Value of Science, Science and Method⊗*, The Science Press, New York,
1929. 
.COMMENT main library, 501  P751F, copy 4;

Polya, George, ⊗4Mathematics and Plausible Reasoning⊗*, Princeton
University Press, Princeton, Vol. 1, 1954;  Vol. 2, 1954.

Polya, George, ⊗4How To Solve It⊗*, Second Edition, Doubleday Anchor Books, 
Garden City, New York, 1957.

Polya, George, ⊗4Mathematical Discovery⊗*, John Wiley & Sons,
New York, Vol. 1, 1962; Vol. 2, 1965.

Richardson, Robert P., and Edward H. Landis, ⊗4Fundamental Conceptions of
Modern Mathematics⊗*, The Open Court Publishing Company, Chicago, 1916.

Rosskopf, Steffe, Taback  (eds.), ⊗4Piagetian Cognitive-
Development Research and Mathematical Education⊗*,
National Council of Teachers of Mathematics, New York, 1971.

Rulifson, Jeff, and... ⊗4QA4, A Procedural Frob...⊗*,
Technical Note..., Artificial Intelligence Center, SRI, Menlo
Park, California, ..., 1973.

Saaty, Thomas L., and Weyl, F. Joachim (eds.), ⊗4The Spirit and the Uses
of the Mathematical Sciences⊗*, McGraw-Hill Book Company, New York, 1969.

Schminke, C. W., and Arnold, William R., eds., ⊗4Mathematics is a Verb⊗*,
The Dryden Press, Hinsdale, Illinois, 1971.

Singh, Jagjit, ⊗4Great Ideas of Modern Mathematics⊗*, Dover Publications,
New York, 1959.

Skemp, Richard R., ⊗4The Psychology of Learning Mathematics⊗*, 
Penguin Books, Ltd., Middlesex, England, 1971.

Slocum, Jonathan, ⊗4The Graph-Processing Language GROPE⊗*, U. Texas at Austin,
Technical Report NL-22, August, 1974.

Smith, Nancy Woodland, ⊗4A Question-Answering System for Elementary Mathematics⊗*,
Stanford Institute for Mathematical Studies in the Social Sciences, Technical
Report 227, April 19, 1974.

Smith, R.L., Nancy Smith, and F.L. Rawson, ⊗4CONSTRUCT: In Search of a Theory of
Meaning⊗*, Stanford IMSSS Technical Report 238, October 25, 1974.

Stein, Sherman K., ⊗4Mathematics: The Man-Made Universe: An Introduction
to the Spirit of Mathematics⊗*, Second Edition, W. H. Freeman and 
Company, San Francisco,  1969.

Stewart, B. M., ⊗4Theory of Numbers⊗*, The MacMillan Co., New York, 1952.

Stokes, C. Newton, ⊗4Teaching the Meanings of Arithmetic⊗*, 
Appleton-Century-Crofts, New York, 1951.

Suppes, Patrick, ⊗4A Probabilistic Theory 
of Causality⊗*, Acta Philosophica Fennica,
Fasc. 24, North-Holland Publishing Company, Amsterdam, 1970.

Teitelman, Warren, ⊗4INTERLISP Reference
Manual⊗*, XEROX PARC, 1974.

Venn, John, ⊗4The Principles of Empirical or Inductive Logic⊗*,
MacMillan and Co., London, 1889.

Waismann, Friedrich, ⊗4Introduction to Mathematical Thinking⊗*, 
Frederick Ungar Publishing Co., New York, 1951.

Wickelgren, Wayne A., ⊗4How to Solve Problems: Elements of a Theory of Problems
and Problem Solving⊗*, W. H. Freeman and Co., San Francisco, 1974.

Wilder, Raymond L., ⊗4Evolution of Mathematical Concepts⊗*, John Wiley & Sons,
Inc., NY, 1968.

Winston, P., (ed.),
"New Progress in Artificial Intelligence",
⊗4MIT AI Lab Memo AI-TR-310⊗*, June, 1974. 
Good summaries of work on Frames,
Demons, Hacker, Heterarchy, Dialogue, and Belief.

Wittner, George E., ⊗4The Structure of Mathematics⊗*, Xerox College Publishing,
Lexington, Mass, 1972.

Wright, Georg H. von, ⊗4A Treatise on Induction and Probability⊗*,
Routledge and Kegan Paul, London, 1951.

.SKIP 3
.SSEC(Articles)

Amarel, Saul, ⊗4On Representations of Problems of Reasoning about
Actions⊗*, Machine Intelligence 3, 1968, pp. 131-171.

Bledsoe, W. W., ⊗4Splitting and Reduction Heuristics in Automatic
Theorem Proving⊗*, Artificial Intelligence 2, 1971, pp. 55-77.

Bledsoe and Bruell, Peter, ⊗4A Man-Machine Theorem-Proving System⊗*,
Artificial Intelligence 5, 1974, 51-72.

Bourbaki, Nicolas, ⊗4The Architecture of Mathematics⊗*, American Mathematical
Monthly, v. 57, pp. 221-232, published by the MAA, Albany, NY, 1950.

Boyer, Robert S., and J. S. Moore, ⊗4Proving Theorems about LISP Functions⊗*,
JACM, V. 22, No. 1, January, 1975, pp. 129-144.

Bruijn, N. G. de, ⊗4AUTOMATH, a language for mathematics⊗*, Notes taken by
Barry Fawcett, of Lectures given at the Seminaire de mathematiques superieures,
University de Montreal, June, 1971. Stanford University Computer Science
Library report number is 005913.

Buchanan, Feigenbaum, and Sridharan, ⊗4Heuristic Theory Formation⊗*,
Machine Intelligence 7, 1972, pp. 267-...

Bundy, Alan, ⊗4Doing Arithmetic with Diagrams⊗*, 3rd IJCAI, 
1973, pp. 130-138.

Daalen, D. T. van, ⊗4A Description of AUTOMATH and some aspects of its language
theory⊗*, in the Proceedings of the Symposium on APL, Paris, December, 1973,
P. Braffort (ed). This volume also contains other, more detailed articles on this
project, by  Bert Jutting and Ids Zanlevan.

Engelman, C., ⊗4MATHLAB: A Program for On-Line Assistance in Symbolic Computation⊗*,
in Proceedings of the FJCC, Volume 2, Spartan Books, 1965.

Engelman, C., ⊗4MATHLAB '68⊗*, in IFIP, Edinburgh, 1968.

Gardner, Martin, ⊗4Mathematical Games⊗*, Scientific American, numerous columns,
including especially:  February, 1975.

Goldstine, Herman H., and J. von Neumann, ⊗4On the Principles of Large Scale
Computing Machines,⊗* pages 1-33 of Volume 5 of A. H. Taub (ed), ⊗4The
Collected Works of John von Neumann⊗*, Pergamon Press, NY, 1963.

Guard, J. R., et al., ⊗4Semi-Automated Mathematics⊗*, JACM 16,
January, 1969, pp. 49-62.

Halmos, Paul R., ⊗4Innovation in Mathematics⊗*, in
Kline, M. (ed), ⊗4Mathematics in the Modern World: Readings from Scientific
American⊗*, W.H.Freeman and Co., San Francisco, 1968, pp. 6-13. Originally in
Scientific American, September, 1958.

Hasse, H., ⊗4Mathematik als Wissenschaft, Kunst und Macht⊗*,
(Mathematics as Science, Art, and Power), Baden-Baden, 1952.

Hewitt, Carl, ⊗4A Universal Modular ACTOR Formalism for
Artificial Intelligence⊗*, Third International Joint Conference on
Artificial Intelligence,
1973, pp. 235-245.

Menges, Gunter, ⊗4Inference and Decision⊗*, 
A Volume in ⊗4Selecta Statistica Canadiana⊗*,
John Wiley & Sons, New York,  1973, pp. 1-16.

Kling, Robert E., ⊗4A Paradigm for Reasoning by Analogy⊗*,
Artificial Intelligence 2, 1971, pp. 147-178.

Knuth, Donald E., ⊗4Ancient Babylonian Algorithms⊗*,
CACM 15, July, 1972, pp. 671-677.

Lee, Richard C. T., ⊗4Fuzzy Logic and the Resolution Principle⊗*,
JACM 19, January, 1972, pp. 109-119.

Lenat, D., ⊗4BEINGs: Knowledge as Interacting Experts⊗*, 4th IJCAI, 1975.

McCarthy, John, and Hayes, Patrick, ⊗4Some Philosophical Problems
from the Standpoint of Artificial Intelligence⊗*, Machine Intelligence
4, 1969, pp. 463-502.

Martin, W., and Fateman, R., ⊗4The MACSYMA System⊗*, Second
Symposium on Symbolic and Algebraic Manipulation, 1971, pp. 59-75.

Minsky, Marvin, ⊗4Frames⊗*, in ⊗4Psychology of Computer
Vision⊗*, 1974.

Moore, J., and Newell, ⊗4How Can Merlin Understand?⊗*, Carnegie-Mellon University
Department of Computer Science "preprint", November 15, 1973.

Neumann, J. von, ⊗4The Mathematician⊗*, in R.B. Heywood (ed), ⊗4The Works
of the Mind⊗*, U. Chicago Press, pp. 180-196, 1947.

Neumann, J. von, ⊗4The Computer and the Brain⊗*, Silliman Lectures, Yale U. Press,
1958.

Pager, David, ⊗4A Proposal for a Computer-based Interactive Scientific
Community⊗*, CACM 15, February, 1972, pp. 71-75.

Pager, David, ⊗4On the Problem of Communicating Complex Information⊗*,
CACM 16, May, 1973, pp. 275-281.

Sloman, Aaron, ⊗4Interactions Between Philosophy and Artificial 
Intelligence: The Role of Intuition and Non-Logical Reasoning in
Intelligence⊗*, Artificial Intelligence 2, 1971, pp. 209-225.

Sloman, Aaron, ⊗4On Learning about Numbers⊗*,...

Winston, Patrick, ⊗4Learning Structural Descriptions
from Examples⊗*, Ph.D. thesis, Dept. of Electrical Engineering,
TR-76, Project MAC, TR-231, MIT AI Lab, September, 1970.

.END
.PORTION CONTENTS
.NOFILL
.PREFACE 20 MILLS
.EVERY FOOTING(,,)
.EVERY HEADING(,,)
.GROUP SKIP 2
.ONCE CENTER
@5↓_TABLE OF CONTENTS_↓⊗*
.TURN ON "{∞→"
.NARROW 5,5


@2↓_TOPIC_↓ ∞ →↓_PAGE_↓@*
.SKIP 2
.RECEIVE